948 results for cosmological parameters from CMBR


Relevance: 100.00%

Abstract:

The transition redshift (deceleration/acceleration) is discussed by expanding the deceleration parameter to first order around its present value. A detailed study is carried out by considering two different parametrizations, q = q_0 + q_1 z and q = q_0 + q_1 z(1 + z)^-1, and the associated free parameters (q_0, q_1) are constrained by three different supernova (SNe) samples. A previous analysis by Riess et al. using the first expansion is slightly improved and confirmed in light of their recent data (Gold07 sample). However, by fitting the model with the Supernova Legacy Survey (SNLS) type Ia sample, we find that the best fit to the transition redshift is z_t = 0.61, instead of z_t = 0.46 as derived by the High-z Supernovae Search (HZSNS) team. This result based on the SNLS sample is also in good agreement with the sample of Davis et al., z_t = 0.60^{+0.28}_{-0.11} (1σ). Such results are in line with some independent analyses and accommodate the concordance flat model (ΛCDM) more easily. For both parametrizations, the three SNe Ia samples considered favour recent acceleration and past deceleration with a high degree of statistical confidence. All the kinematic results presented here depend neither on the validity of general relativity nor on the matter-energy content of the Universe.
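
For reference, the transition redshift follows directly from setting q(z_t) = 0 in each of the two expansions quoted in the abstract; a short derivation (not part of the abstract):

```latex
% Linear parametrization:
q(z) = q_0 + q_1 z
  \;\Rightarrow\; z_t = -\frac{q_0}{q_1},
  \qquad q_0 < 0,\; q_1 > 0.
% Bounded parametrization:
q(z) = q_0 + q_1 \frac{z}{1+z}
  \;\Rightarrow\; z_t = -\frac{q_0}{q_0 + q_1}.
```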

Relevance: 100.00%

Abstract:

Clusters of galaxies are the most impressive gravitationally bound systems in the universe, and their abundance (the cluster mass function) is an important statistic to probe the matter density parameter (Ω_m) and the amplitude of density fluctuations (σ_8). The cluster mass function is usually described in terms of the Press-Schechter (PS) formalism, where the primordial density fluctuations are assumed to be a Gaussian random field. In previous works we have proposed a non-Gaussian analytical extension of the PS approach based on the q-power-law distribution (PL) of the nonextensive kinetic theory. In this paper, by applying the PL distribution to fit the observational mass function data from the X-ray highest flux-limited sample (HIFLUGCS), we find a strong degeneracy among the cosmic parameters σ_8, Ω_m and the q parameter of the PL distribution. A joint analysis involving recent observations of the baryon acoustic oscillation (BAO) peak and the cosmic microwave background (CMB) shift parameter is carried out in order to break this degeneracy and better constrain the physically relevant parameters. The present results suggest that the next generation of cluster surveys will be able to probe the quantities of cosmological interest (σ_8, Ω_m) and the underlying cluster physics quantified by the q parameter.
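
For orientation, the Gaussian Press-Schechter mass function that the q-power-law extension generalizes is quoted below; this is the standard textbook form, not the paper's PL expression, which replaces the Gaussian kernel:

```latex
% Standard (Gaussian) Press-Schechter mass function: \bar\rho is the mean
% matter density, \sigma(M) the rms fluctuation on mass scale M, and
% \delta_c \simeq 1.686 the critical collapse threshold.
\frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\,\frac{\bar{\rho}}{M^{2}}\,
  \frac{\delta_c}{\sigma(M)}\,
  \left|\frac{d\ln\sigma(M)}{d\ln M}\right|\,
  \exp\!\left[-\frac{\delta_c^{2}}{2\,\sigma^{2}(M)}\right].
% The nonextensive extension replaces the Gaussian kernel with a
% q-power law that recovers this expression in the limit q -> 1.
```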

Relevance: 100.00%

Abstract:

In accelerating dark energy models, the estimates of the Hubble constant, H_0, from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Ω_M), the curvature (Ω_K) and the equation-of-state parameter (ω). In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical β model obtained through the SZE/X-ray technique, we constrain H_0 in the framework of a general ΛCDM model (arbitrary curvature) and a flat XCDM model with a constant equation-of-state parameter ω = p_x/ρ_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique, we obtain for the nonflat ΛCDM model H_0 = 74^{+5.0}_{-4.0} km s^-1 Mpc^-1 (1σ), whereas for a flat universe with constant equation-of-state parameter we find H_0 = 72^{+5.5}_{-4.0} km s^-1 Mpc^-1 (1σ). By assuming that galaxy clusters are described by a spherical β model, these results change to H_0 = 6^{+8.0}_{-7.0} and H_0 = 59^{+9.0}_{-6.0} km s^-1 Mpc^-1 (1σ), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on H_0 estimates for this combination of data.
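
A minimal sketch of the cosmological dependence at play: the angular diameter distance in a flat XCDM model scales as 1/H_0 at fixed (Ω_m, ω), which is why distance data constrain the Hubble constant once the other parameters are pinned down. The parameter values below are illustrative, not the paper's fit:

```python
# Sketch: angular diameter distance in flat XCDM, showing the 1/H0 scaling
# probed by SZE/X-ray cluster distances. Values are illustrative only.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, omega_m, w):
    """Dimensionless expansion rate for a flat XCDM cosmology."""
    return np.sqrt(omega_m * (1 + z)**3
                   + (1 - omega_m) * (1 + z)**(3 * (1 + w)))

def angular_diameter_distance(z, h0, omega_m=0.27, w=-1.0):
    """D_A in Mpc; note the overall 1/h0 prefactor."""
    integral, _ = quad(lambda zp: 1.0 / E(zp, omega_m, w), 0.0, z)
    return C_KM_S / h0 * integral / (1 + z)

print(angular_diameter_distance(0.3, h0=72.0))  # roughly 900 Mpc
```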

Relevance: 100.00%

Abstract:

The current cosmological dark sector (dark matter plus dark energy) is challenging our comprehension of the physical processes taking place in the Universe. Recently, some authors tried to falsify the basic underlying assumptions of such a dark matter-dark energy paradigm. In this Letter, we show that oversimplifications of the measurement process may produce false positives for any consistency test based on the globally homogeneous and isotropic Λ cold dark matter (ΛCDM) model and its expansion history based on distance measurements. In particular, when local inhomogeneity effects due to clumped matter or voids are taken into account, an apparent violation of the basic assumptions (Copernican Principle) seems to be present. Conversely, the amplitude of the deviations also probes the degree of reliability underlying the phenomenological Dyer-Roeder procedure by confronting its predictions with the accuracy of the weak lensing approach. Finally, a new method is devised to reconstruct the effects of the inhomogeneities in a ΛCDM model, and some suggestions of how to distinguish clumpiness (or void) effects from different cosmologies are discussed.
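
For context, one standard form of the Dyer-Roeder (Zel'dovich-Kantowski-Dyer-Roeder) distance equation for a flat ΛCDM background, with a smoothness parameter α quantifying the clumpiness; this generic form is quoted as background, and the Letter's exact implementation may differ:

```latex
% ZKDR equation for the Dyer-Roeder angular diameter distance D(z), in units
% of c/H_0; alpha is the fraction of homogeneously distributed matter
% (alpha = 1: filled beam; alpha < 1: partially clumpy beam).
\frac{d^{2}D}{dz^{2}}
 + \left(\frac{2}{1+z} + \frac{1}{E(z)}\frac{dE}{dz}\right)\frac{dD}{dz}
 + \frac{3\,\alpha\,\Omega_m\,(1+z)}{2\,E^{2}(z)}\,D = 0,
\qquad
E(z) = \sqrt{\Omega_m (1+z)^{3} + 1 - \Omega_m},
% with initial conditions D(0) = 0 and dD/dz|_{z=0} = 1.
```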

Relevance: 100.00%

Abstract:

Holding the major share of stellar mass in galaxies and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating these various evolution scenarios, as well as a useful means to provide insights into cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. The comparison between the output and input light-weighted ages shows good agreement starting from SNRs of ∼10, with a bias of ∼2.2% and a dispersion of ∼3%. Furthermore, metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (for a fixed redshift bin); absolute light-weighted ages, moreover, turn out to be independent of the fitting parameters and the synthetic models used. Metallicities are very similar to each other and clearly consistent with those derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra gave the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR. Our upcoming aim is to adopt this approach also on galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra, just like the Euclid ones.
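
As a loose illustration of the idea behind full-spectrum fitting (a toy analogue, not the STARLIGHT code or its model libraries): express an observed spectrum as a non-negative combination of template spectra and read off a light-weighted age. All templates, ages and weights below are invented for the sketch:

```python
# Toy analogue of full-spectrum fitting via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
wave = np.linspace(3800.0, 7000.0, 500)           # wavelength grid [angstrom]
ages = np.array([0.5, 2.0, 10.0])                 # template ages [Gyr]
templates = np.array([np.exp(-wave / (4000.0 * (1 + a))) for a in ages])

true_weights = np.array([0.1, 0.3, 0.6])          # true light fractions
observed = true_weights @ templates
observed += rng.normal(0.0, observed.mean() / 20.0, wave.size)  # SNR ~ 20

weights, _ = nnls(templates.T, observed)          # non-negative combination
weights /= weights.sum()
print("light-weighted age [Gyr]:", weights @ ages)
```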

Relevance: 100.00%

Abstract:

In this paper we propose a composite depth of penetration (DOP) approach to excluding bottom reflectance in mapping water quality parameters from Landsat Thematic Mapper (TM) data in the shallow coastal zone of Moreton Bay, Queensland, Australia. Three DOPs were calculated from TM1, TM2 and TM3, in conjunction with bathymetric data, at an accuracy ranging from ±5% to ±23%. These depths were used to segment the image into four DOP zones. Sixteen in situ water samples were collected concurrently with the recording of the satellite image. These samples were used to establish regression models for total suspended sediment (TSS) concentration and Secchi depth with respect to a particular DOP zone. Containing identical bands and band transformations for both parameters, the models are linear for TSS concentration and logarithmic for Secchi depth. Based on these models, TSS concentration and Secchi depth were mapped from the satellite image in the respective DOP zones. Their mapped patterns are consistent with the in situ observed ones. Spatially, overestimation and underestimation of the parameters are restricted to localised areas but related to the absolute value of the parameters. The mapping was accomplished more accurately using multiple DOP zones than using a single zone in shallower areas. The composite DOP approach enables the mapping to be extended to areas as shallow as <3 m.
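
A minimal sketch of the per-zone regression step described above, with hypothetical reflectances, sample values and band choices (the study's actual coefficients and band transformations are not reproduced here):

```python
# Per-DOP-zone regression: linear model for TSS, logarithmic for Secchi depth.
import numpy as np

# In-situ samples for one DOP zone: rows = samples, cols = [TM1, TM2, TM3]
bands = np.array([[0.11, 0.09, 0.07],
                  [0.14, 0.12, 0.10],
                  [0.18, 0.15, 0.13],
                  [0.21, 0.19, 0.16]])
tss = np.array([4.0, 7.5, 12.0, 16.5])        # TSS concentration [mg/L]
secchi = np.array([2.8, 2.0, 1.3, 0.9])       # Secchi depth [m]

X = np.column_stack([np.ones(len(bands)), bands])
tss_coef, *_ = np.linalg.lstsq(X, tss, rcond=None)             # linear fit
sec_coef, *_ = np.linalg.lstsq(X, np.log(secchi), rcond=None)  # log fit

pixel = np.array([1.0, 0.16, 0.13, 0.11])     # [1, TM1, TM2, TM3] for a pixel
print("TSS [mg/L]:", pixel @ tss_coef)
print("Secchi [m]:", np.exp(pixel @ sec_coef))
```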

Relevance: 100.00%

Abstract:

This study aimed to describe wandering using new parameters and to evaluate the parameters as a function of cognitive impairment and mobility. Forty-four wanderers in long-term care settings were videotaped 12 times. Rate and duration of wandering episodes were plotted and used to derive parameters from values above and below case medians, the proportion of hours spent wandering, and time of day. Participants wandered during 47% of observations; on average, the hourly rate was 4.3 episodes, the peak hourly rate was 18 episodes, and the peak hourly duration was 19.9 minutes. Mini-Mental State Examination (MMSE) score was negatively correlated with overall duration and with the number of observations during which duration exceeded 15 minutes per hour, was positively correlated with the number of observations without wandering, and was not significantly correlated with rate-related parameters. Mobility correlated positively with rate and duration parameters. The interaction of MMSE score and mobility was the strongest predictor of wandering duration. Parameters derived from repeated measures provide a new view of daytime wandering and insight into the relationships of MMSE score and mobility status with specific parameters of wandering.
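
A small sketch of how such rate- and duration-based parameters can be derived from repeated observations; the observation values below are synthetic, not the study's data:

```python
# Derive wandering parameters from repeated hourly observations.
import numpy as np

# Each row: one observation hour; columns: [episode count, minutes wandering]
obs = np.array([[0, 0.0], [3, 6.5], [18, 19.9], [5, 12.0],
                [0, 0.0], [2, 4.0], [7, 16.0], [4, 9.5]])
rate, duration = obs[:, 0], obs[:, 1]

print("proportion of hours wandering:", np.mean(rate > 0))
print("mean hourly rate:", rate.mean())
print("peak hourly rate:", rate.max())
print("peak hourly duration [min]:", duration.max())
print("hours exceeding 15 min/h:", int(np.sum(duration > 15)))
print("case median rate (split point for derived parameters):",
      np.median(rate))
```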

Relevance: 100.00%

Abstract:

Port land uses are subjected to unique anthropogenic activities compared to typical urban land uses. This uniqueness results in distinctive stormwater quality characteristics, and that distinction has made conventional approaches to pollutant load estimation inaccurate. This is also the case for the Port of Brisbane (PoB). The study discussed in this paper was conducted to estimate the pollutant contributions from Port-specific land uses at PoB. For the estimation, software modules embedded in MIKE URBAN were used. An innovative approach was adopted in the modelling, whereby the conventional model calibration step did not need to be performed to generate suitable site-specific parameters. Instead, equations and site-specific parameters that replicate pollutant build-up and wash-off were generated from an extensive field investigation. Models were simulated incorporating site-specific parameters from six different Port-specific land uses and rainfall events from three representative years. Outcomes of the modelling exercise were used to identify the distinct pollutant contributions from the different Port land uses.
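
For illustration, the generic exponential build-up and wash-off equations commonly used in stormwater quality models of this kind are sketched below; the coefficients are invented, not the site-specific parameters derived in the study:

```python
# Generic exponential pollutant build-up and wash-off relationships.
import numpy as np

def buildup(days, b_max=12.0, k_b=0.4):
    """Pollutant build-up [kg/ha] after `days` of antecedent dry weather."""
    return b_max * (1.0 - np.exp(-k_b * days))

def washoff(b0, intensity, duration, k_w=0.08):
    """Mass washed off [kg/ha] for rainfall intensity [mm/h], duration [h]."""
    return b0 * (1.0 - np.exp(-k_w * intensity * duration))

b0 = buildup(days=7.0)                      # build-up over a dry week
print("available load [kg/ha]:", b0)
print("washed off [kg/ha]:", washoff(b0, intensity=20.0, duration=1.5))
```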

Relevance: 100.00%

Abstract:

BACKGROUND Research on engineering design is a core area of concern within engineering education, and a fundamental understanding of how engineering students approach and undertake design is necessary in order to develop effective design models and pedagogies. Understanding the factors related to design experiences in education and how they affect student practice can help educators as well as designers to leverage these factors as part of the design process. PURPOSE This study investigated the design practices of first-year engineering students and their experiences with a first-year engineering course design project. The research questions that guided the investigation were: 1. From a student perspective, what design parameters or criteria are most important? 2. How does this perspective impact subsequent student design practice throughout the design process? DESIGN/METHOD The authors employed qualitative multi-case study methods (Miles & Huberman, 1994) in order to answer the research questions. Participant teams were observed and video recorded during team design meetings in which they researched the background for the design problem, brainstormed and sketched possible solutions, and built prototypes and final models of their design solutions as part of a course design project. Analysis focused on explanation building (Yin, 2009) and utilized within-case and cross-case analysis (Miles & Huberman, 1994). RESULTS We found that students focused disproportionately on the functional parameter, i.e. the physical implementation of their solution, and the possible/applicable parameter, i.e. a possible and applicable solution that benefited the user, in comparison to other given parameters such as safety and innovativeness. In addition, we found that individual teams focused on the functional and possible/applicable parameters in early design phases such as brainstorming/ideation and sketching. When prompted to discuss these non-salient parameters (from the student perspective) in the final design report, student design teams often used post-hoc justification to support how the final designs fit the parameters they did not initially consider. CONCLUSIONS This study suggests that student design teams become fixated on (and consequently prioritize) certain parameters they interpret as important because they feel these parameters were described more explicitly in terms of how they were to be met and assessed. Students fail to consider other parameters, perceived to be less directly assessable, unless prompted to do so. Failure to consider other parameters in the early design phases subsequently affects their approach in later design phases as well. Case studies examining students' study strategies within three Australian universities illustrate similarities with some student approaches to design.

Relevance: 100.00%

Abstract:

Traditionally, it is not easy to carry out tests to identify modal parameters of existing railway bridges because of the testing conditions and the complicated nature of civil structures. A six-year (2007-2012) research program was conducted to monitor a group of 25 railway bridges. One of the tasks was to devise guidelines for identifying their modal parameters. This paper presents the experience acquired from such identification. The modal analysis of four representative bridges of this group is reported (B5, B15, B20 and B58A, crossing the Carajás railway in northern Brazil), using three different excitation sources: drop weight, free vibration after train passage, and ambient conditions. To extract the dynamic parameters from the recorded data, Stochastic Subspace Identification and Frequency Domain Decomposition methods were used. Finite-element models were constructed to facilitate the dynamic measurements. The results show good agreement between the measured and computed natural frequencies and mode shapes. The findings provide some guidelines on methods of excitation, record length, and methods of modal analysis, including the use of projected channels and harmonic detection, helping researchers and maintenance teams obtain good dynamic characteristics from measurement data.
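
A minimal sketch of Frequency Domain Decomposition, one of the two identification methods named above: build the cross-power spectral density (CSD) matrix of the measured channels and track its first singular value, whose peaks indicate natural frequencies (the corresponding singular vectors approximate mode shapes). The signals here are synthetic, not bridge data:

```python
# FDD sketch: SVD of the CSD matrix per frequency line, then peak picking.
import numpy as np
from scipy.signal import csd

fs = 200.0                                    # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
# Two synthetic channels dominated by a 3.2 Hz mode plus measurement noise
x = np.sin(2 * np.pi * 3.2 * t) + 0.5 * rng.standard_normal(t.size)
y = 0.8 * np.sin(2 * np.pi * 3.2 * t) + 0.5 * rng.standard_normal(t.size)
channels = [x, y]

n = len(channels)
f, _ = csd(x, x, fs=fs, nperseg=2048)
G = np.empty((f.size, n, n), dtype=complex)   # CSD matrix per frequency
for i in range(n):
    for j in range(n):
        _, G[:, i, j] = csd(channels[i], channels[j], fs=fs, nperseg=2048)

s1 = np.linalg.svd(G, compute_uv=False)[:, 0]   # first singular value
print("peak frequency [Hz]:", f[np.argmax(s1)])  # ~ 3.2
```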

Relevance: 100.00%

Abstract:

We consider estimation of mortality rates and growth parameters from length-frequency data of a fish stock and derive the underlying length distribution of the population and the catch when there is individual variability in the von Bertalanffy growth parameter L_∞. The model is flexible enough to accommodate 1) any recruitment pattern as a function of both time and length, 2) length-specific selectivity, and 3) varying fishing effort over time. The maximum likelihood method gives consistent estimates, provided the underlying distribution for individual variation in growth is correctly specified. Simulation results indicate that our method is reasonably robust to violations of the assumptions. The method is applied to tiger prawn (Penaeus semisulcatus) data to obtain estimates of natural and fishing mortality.
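
For context, the generic ingredients such length-based models combine; this is a textbook-level sketch, not the paper's full likelihood:

```latex
% von Bertalanffy growth with individual variability in L_infinity:
L(a) = L_\infty \left(1 - e^{-K\,(a - a_0)}\right),
\qquad L_\infty \sim p(L_\infty),
% exponential survival under total mortality Z = M + F
% (natural plus fishing mortality):
N(a) = N_0 \, e^{-Z a}.
% The observed length frequency then mixes growth, survival, the
% recruitment pattern and length-specific selectivity over individuals.
```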

Relevance: 100.00%

Abstract:

We discuss the inverse problem associated with the propagation of the field autocorrelation of light through a highly scattering object like tissue. In the first part of the work, we reconstruct the optical absorption coefficient μ_a and the particle diffusion coefficient D_B from simulated measurements which are integrals of a quantity computed from the measured intensity and intensity autocorrelation g_2(τ) at the boundary. In the second part we recover the mean square displacement (MSD) distribution of particles in an inhomogeneous object from the sampled g_2(τ) measured on the boundary. From the MSD, we compute the storage and loss moduli distributions in the object. We have devised computationally easy methods to construct the sensitivity matrices which are used in the iterative reconstruction algorithms for recovering these parameters from the measurements. The results of the reconstruction of μ_a, D_B, MSD and the viscoelastic parameters, which are presented, show reasonably good positional and quantitative accuracy.
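
Two standard relations behind this inverse problem, quoted for orientation (one common form; the paper's exact model may differ in details such as the fraction of dynamic scatterers):

```latex
% Siegert relation between the measured intensity autocorrelation g_2 and
% the field autocorrelation g_1 (beta depends on the collection optics):
g_2(\tau) = 1 + \beta\,\left|g_1(\tau)\right|^{2}.
% Correlation diffusion equation for the unnormalized field autocorrelation
% G_1(r, tau), with particle mean square displacement <\Delta r^2(\tau)>:
\nabla\cdot\left[D(\mathbf{r})\,\nabla G_1(\mathbf{r},\tau)\right]
 - \left[\mu_a(\mathbf{r})
 + \tfrac{1}{3}\,\mu_s'\,k_0^{2}\,\langle \Delta r^{2}(\tau)\rangle\right]
   G_1(\mathbf{r},\tau) = -\,S(\mathbf{r}).
```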

Relevance: 100.00%

Abstract:

We use Bayesian model selection techniques to test extensions of the standard flat ΛCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models, are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation without further computational effort, and it comes with an associated error evaluation. Moreover, it provides an unbiased estimator of the evidence after any fixed number of iterations, and it is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat ΛCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r = 0.
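
A minimal sketch of the importance-sampling evidence estimate that PMC yields as a by-product: with samples x_i drawn from a proposal q, the evidence Z = ∫ π(x) L(x) dx is estimated by the mean of the importance weights. This is a toy one-dimensional Gaussian problem, not a cosmological run:

```python
# Importance-sampling evidence estimate, the core of the PMC by-product.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

prior = stats.norm(0.0, 2.0)          # prior pi(x)
like = stats.norm(1.0, 0.5)           # likelihood L(x), as a function of x
proposal = stats.norm(0.8, 0.7)       # (adapted) PMC proposal q(x)

x = proposal.rvs(size=50_000, random_state=rng)
w = prior.pdf(x) * like.pdf(x) / proposal.pdf(x)   # importance weights

z_hat = w.mean()                      # unbiased evidence estimate
z_err = w.std(ddof=1) / np.sqrt(w.size)
print(f"log-evidence: {np.log(z_hat):.3f} +/- {z_err / z_hat:.3f}")
```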

Relevance: 100.00%

Abstract:

We describe a noniterative method for recovering the optical absorption coefficient distribution from the absorbed energy map reconstructed using simulated and noisy boundary pressure measurements. The source reconstruction problem is first solved for the absorbed energy map corresponding to single- and multiple-source illuminations from the side of the imaging plane. It is shown that the absorbed energy map and the absorption coefficient distribution, recovered from the single-source illumination with a large variation in photon flux distribution, have signal-to-noise ratios comparable to those of the parameters reconstructed from a more uniform photon density distribution corresponding to multiple-source illuminations. The absorbed energy map enters as absorption coefficient times photon flux in the time-independent diffusion equation (DE) governing photon transport, allowing the photon flux to be recovered in a single step. The recovered photon flux is used to compute the optical absorption coefficient distribution from the absorbed energy map. In the absence of experimental data, we obtain the boundary measurements through Monte Carlo simulations, and we attempt to address the possible limitations of the DE model in the overall reconstruction procedure.
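
The one-step recovery described above can be written compactly; substituting the absorbed energy H = μ_a Φ into the diffusion equation removes the unknown μ_a (a sketch of the stated idea, not the paper's exact discretization):

```latex
% Time-independent diffusion equation for photon flux \Phi with source S:
-\nabla\cdot\left[D(\mathbf{r})\,\nabla \Phi(\mathbf{r})\right]
  + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) = S(\mathbf{r}).
% Substituting the reconstructed absorbed energy map H = \mu_a \Phi leaves
% a single linear solve for \Phi, after which \mu_a follows pointwise:
-\nabla\cdot\left[D(\mathbf{r})\,\nabla \Phi(\mathbf{r})\right]
  = S(\mathbf{r}) - H(\mathbf{r}),
\qquad
\mu_a(\mathbf{r}) = \frac{H(\mathbf{r})}{\Phi(\mathbf{r})}.
```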

Relevance: 100.00%

Abstract:

The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data exploded, incorporating fields of new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zel'dovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. Since the first detection, it took nearly 30 years before the first evidence of fluctuations on the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. The progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang.

This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of residual noise present in the maps. In the end, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of the isocurvature modes would have a considerable impact due to their power in model selection.
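
Destriping rests on a simple linear data model; one standard formulation is quoted below as background (with the white-noise covariance taken as the identity for brevity; the thesis's own derivation may carry additional weighting):

```latex
% Data model: time-ordered data d, pointing matrix P, sky map m,
% low-frequency noise modelled as offsets a spread by F, white noise w:
\mathbf{d} = \mathbf{P}\,\mathbf{m} + \mathbf{F}\,\mathbf{a} + \mathbf{w}.
% Maximum-likelihood offsets and the destriped map estimate:
\hat{\mathbf{a}} =
  \left(\mathbf{F}^{T}\mathbf{Z}\,\mathbf{F}\right)^{-1}
  \mathbf{F}^{T}\mathbf{Z}\,\mathbf{d},
\qquad
\hat{\mathbf{m}} =
  \left(\mathbf{P}^{T}\mathbf{P}\right)^{-1}\mathbf{P}^{T}
  \left(\mathbf{d} - \mathbf{F}\,\hat{\mathbf{a}}\right),
% where Z = I - P (P^T P)^{-1} P^T projects out the sky signal.
```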