81 results for statistical physics
Abstract:
Creation of cold dark matter (CCDM) can be described macroscopically by a negative pressure and is therefore capable of accelerating the Universe without the need for an additional dark energy component. In this framework, we discuss the evolution of perturbations by considering a neo-Newtonian approach in which, unlike in standard Newtonian cosmology, the fluid pressure is taken into account even in the homogeneous and isotropic background equations (Lima, Zanchin, and Brandenberger, MNRAS 291, L1, 1997). The evolution of the density contrast is calculated in the linear approximation and compared to the one predicted by the ΛCDM model. The difference between the CCDM and ΛCDM predictions at the perturbative level is quantified using three different statistical methods, namely: a simple χ² analysis in the relevant parameter space, a Bayesian statistical inference, and, finally, a Kolmogorov-Smirnov test. We find that, under certain circumstances, the CCDM scenario analyzed here predicts an overall dynamics (including the Hubble flow and matter fluctuation field) that fully recovers that of the traditional cosmic concordance model. Our basic conclusion is that such a reduction of the dark sector provides a viable alternative description to the accelerating ΛCDM cosmology.
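Of the three comparison methods named in this abstract, the Kolmogorov-Smirnov test is the simplest to illustrate: it measures the maximum distance between two empirical cumulative distributions. A minimal stdlib-only sketch follows; the two samples are purely illustrative stand-ins for binned density-contrast predictions, not data from the paper.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum absolute distance
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = bisect.bisect_right(a, x) / len(a)  # empirical CDF of a at x
        fb = bisect.bisect_right(b, x) / len(b)  # empirical CDF of b at x
        d = max(d, abs(fa - fb))
    return d

# Illustrative values only (hypothetical binned predictions, not the paper's)
ccdm = [0.10, 0.22, 0.31, 0.45, 0.58, 0.70]
lcdm = [0.11, 0.21, 0.33, 0.44, 0.57, 0.71]
print(ks_statistic(ccdm, lcdm))
```

A small KS statistic (relative to the critical value for the sample sizes) is what "statistically equivalent predictions" means operationally in this kind of comparison.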
Abstract:
The influence of a possible nonzero chemical potential μ on the nature of dark energy is investigated by assuming that the dark energy is a relativistic perfect simple fluid obeying the equation of state p = ωρ (ω < 0, constant). The entropy condition, S ≥ 0, implies that the possible values of ω depend heavily on the magnitude as well as on the sign of the chemical potential. For μ > 0, the ω parameter must be greater than -1 (vacuum is forbidden), while for μ < 0 not only the vacuum but even a phantomlike behavior (ω < -1) is allowed. In any case, the ratio between the chemical potential and temperature remains constant, that is, μ/T = μ₀/T₀. Assuming that the dark energy constituents have either a bosonic or fermionic nature, the general form of the spectrum is also proposed. For bosons μ is always negative, and the extended Wien's law allows only a dark component with ω < -1/2, which includes the vacuum and phantomlike cases. The same happens in the fermionic branch for μ < 0. However, fermionic particles with μ > 0 are permitted only if -1
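The sign dependence quoted above can be sketched in one line, assuming the standard Euler relation Ts = ρ + p − μn for a simple fluid with the stated equation of state:

```latex
Ts = \rho + p - \mu n = (1+\omega)\rho - \mu n
\quad\Longrightarrow\quad
s \ge 0 \iff (1+\omega)\rho \ge \mu n .
```

For μ > 0 the right-hand side is positive, so the vacuum case ω = −1 (for which (1+ω)ρ = 0) is excluded and ω > −1 is required; for μ ≤ 0 the inequality can hold even with ω ≤ −1, which is why the vacuum and phantom regimes survive only for a negative chemical potential.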
Abstract:
A dark energy component has recently been proposed to explain the current acceleration of the Universe. Unless some unknown symmetry in Nature prevents or suppresses it, such a field may interact with the pressureless component of dark matter, giving rise to the so-called models of coupled quintessence. In this paper we propose a new cosmological scenario in which radiation and baryons are conserved, while the dark energy component decays into cold dark matter. The dilution of cold dark matter particles, attenuated with respect to the usual a⁻³ scaling due to the interaction process, is characterized by a positive parameter ε, whereas the dark energy satisfies the equation of state p_x = ωρ_x (ω < 0). We carry out a joint statistical analysis involving recent observations of type Ia supernovae, the baryon acoustic oscillation peak, and the cosmic microwave background shift parameter to check the observational viability of the coupled quintessence scenario proposed here.
Abstract:
The mass function of cluster-size halos and their redshift distribution are computed for 12 distinct accelerating cosmological scenarios and confronted with the predictions of the conventional flat ΛCDM model. The comparison with ΛCDM is performed in two steps. First, we determine the free parameters of all models through a joint analysis involving the latest cosmological data, using type Ia supernovae, the cosmic microwave background shift parameter, and baryon acoustic oscillations. Apart from a braneworld-inspired cosmology, we find that the derived Hubble relation of the remaining models reproduces the ΛCDM results with approximately the same degree of statistical confidence. Second, in order to distinguish the different dark energy models from the expectations of ΛCDM, we analyze the predicted cluster-size halo redshift distribution on the basis of two future cluster surveys: (i) an X-ray survey based on the eROSITA satellite, and (ii) a Sunyaev-Zel'dovich survey based on the South Pole Telescope. As a result, we find that the predictions of 8 out of the 12 dark energy models can be clearly distinguished from the ΛCDM cosmology, while the predictions of the remaining 4 are statistically equivalent to those of ΛCDM, as far as the expected cluster mass function and redshift distribution are concerned. The present analysis suggests that such a technique is highly competitive with independent tests probing the late-time evolution of the Universe and the associated dark energy effects.
Abstract:
Context. The detailed chemical abundances of extremely metal-poor (EMP) stars are key guides to understanding the early chemical evolution of the Galaxy. Most existing data, however, treat giant stars that may have experienced internal mixing later. Aims. We aim to compare the results for giants with new, accurate abundances for all observable elements in 18 EMP turnoff stars. Methods. VLT/UVES spectra at R ≈ 45,000 and S/N ≈ 130 per pixel (λλ 330-1000 nm) are analysed with OSMARCS model atmospheres and the TURBOSPECTRUM code to derive abundances for C, Mg, Si, Ca, Sc, Ti, Cr, Mn, Co, Ni, Zn, Sr, and Ba. Results. For Ca, Ni, Sr, and Ba, we find excellent consistency with our earlier sample of EMP giants at all metallicities. However, our abundances of C, Sc, Ti, Cr, Mn, and Co are ≈0.2 dex larger than in giants of similar metallicity. Mg and Si abundances are ≈0.2 dex lower (the giant [Mg/Fe] values are slightly revised), while Zn is again ≈0.4 dex higher than in giants of similar [Fe/H] (6 stars only). Conclusions. For C, the dwarf/giant discrepancy could have an astrophysical cause, but for the other elements it must arise from shortcomings in the analysis. Approximate computations of granulation (3D) effects yield smaller corrections for giants than for dwarfs, but suggest that this is an unlikely explanation, except perhaps for C, Cr, and Mn. NLTE computations for Na and Al provide consistent abundances between dwarfs and giants, unlike the LTE results, and would be highly desirable for the other discrepant elements as well. Meanwhile, we recommend using the giant abundances as reference data for Galactic chemical evolution models.
Abstract:
In this paper, we initially present an algorithm for automatic composition of melodies using chaotic dynamical systems. Afterward, we characterize chaotic music in a comprehensive way as comprising three perspectives: musical discrimination, dynamical influence on musical features, and musical perception. With respect to the first perspective, the coherence between generated chaotic melodies (continuous as well as discrete chaotic melodies) and a set of classical reference melodies is characterized by statistical descriptors and melodic measures. The significant differences among the three types of melodies are determined by discriminant analysis. Regarding the second perspective, the influence of dynamical features of chaotic attractors, e.g., Lyapunov exponent, Hurst coefficient, and correlation dimension, on melodic features is determined by canonical correlation analysis. The last perspective is related to perception of originality, complexity, and degree of melodiousness (Euler's gradus suavitatis) of chaotic and classical melodies by nonparametric statistical tests. (c) 2010 American Institute of Physics. [doi: 10.1063/1.3487516]
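The core idea of the composition algorithm described above — driving note choices from a chaotic orbit — can be sketched with a logistic map quantized onto a scale. The scale, parameter values, and MIDI mapping below are illustrative choices, not the paper's actual algorithm.

```python
def logistic_melody(x0=0.37, r=3.99, n=16, scale=(0, 2, 4, 7, 9)):
    """Generate n MIDI notes by quantizing a logistic-map orbit
    onto a pentatonic scale (illustrative mapping, not the paper's)."""
    x, notes = x0, []
    for _ in range(n):
        x = r * x * (1 - x)                  # chaotic logistic map step
        degree = int(x * len(scale))          # quantize orbit value to a scale step
        notes.append(60 + scale[min(degree, len(scale) - 1)])  # around middle C
    return notes

melody = logistic_melody()
print(melody)
```

Because the orbit is deterministic but aperiodic, small changes in x0 or r yield entirely different melodies, which is exactly the property the dynamical-influence analysis (Lyapunov exponent, Hurst coefficient, correlation dimension) probes.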
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations, the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated based on field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from statistical properties of the burn scars. The sources are then spatially and temporally distributed and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to perform the prognosis of related tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are simultaneously used to compare the emission strength in terms of the resultant tracer distribution. We also assess the effect of using the daily time resolution of fire emissions by including runs with monthly-averaged emissions. We evaluate the performance of the model using the different emission estimation techniques by comparing the model results with direct measurements of carbon monoxide both near-surface and airborne, as well as remote sensing derived products. The model results obtained using the 3BEM methodology of estimation introduced in this paper show relatively good agreement with the direct measurements and MOPITT data product, suggesting the reliability of the model at local to regional scales.
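The per-pixel emission estimate described above is commonly written as burned area × fuel load × combustion completeness × emission factor (the standard burned-area formulation); whether 3BEM uses exactly this decomposition is an assumption here, and all numbers below are illustrative, not the model's field values.

```python
def fire_emission(burnt_area_m2, biomass_kg_m2, combustion_factor, emission_factor_g_kg):
    """Emitted tracer mass (g) for one detected fire pixel:
    burned area x fuel load x combustion completeness x emission factor.
    Illustrative sketch; not 3BEM's actual parameterization."""
    burned_biomass_kg = burnt_area_m2 * biomass_kg_m2 * combustion_factor
    return burned_biomass_kg * emission_factor_g_kg

# Hypothetical tropical-forest numbers: 1 ha burned, 16 kg/m^2 fuel,
# 45% combusted, CO-like emission factor of 104 g per kg burned.
print(fire_emission(1.0e4, 16.0, 0.45, 104.0))
```

Summing such per-pixel masses and distributing them in space and time is what feeds the daily assimilation into CATT-BRAMS described in the abstract.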
Abstract:
The mechanism of incoherent π⁰ and η photoproduction from complex nuclei is investigated from 4 to 12 GeV with an extended version of the multicollisional Monte Carlo (MCMC) intranuclear cascade model. The calculations take into account the elementary photoproduction amplitudes via a Regge model and the nuclear effects of photon shadowing, Pauli blocking, and meson-nucleus final-state interactions. The results for π⁰ photoproduction reproduced for the first time the magnitude and energy dependence of the measured ratios σ(γA)/σ(γN) for several nuclei (Be, C, Al, Cu, Ag, and Pb) from a Cornell experiment. The results for η photoproduction fitted the inelastic background in Cornell's yields remarkably well, which is clearly not isotropic as previously assumed in Cornell's analysis. With this constraint on the background, the η → γγ decay width was extracted using the Primakoff method, combining Be and Cu data [Γ(η→γγ) = 0.476(62) keV] and using Be data only [Γ(η→γγ) = 0.512(90) keV], where the errors are statistical only. These results are in sharp contrast (by ≈50-60%) with the value reported by the Cornell group [Γ(η→γγ) = 0.324(46) keV] and in line with the Particle Data Group average of 0.510(26) keV.
Abstract:
Incoherent η photoproduction in nuclei is evaluated at forward angles from 4 to 9 GeV using a multiple-scattering Monte Carlo cascade calculation with full η-nucleus final-state interactions. The Primakoff, nuclear coherent, and nuclear incoherent components of the cross sections fit previous Cornell measurements for Be and Cu remarkably well, suggesting a destructive interference between the Coulomb and nuclear coherent amplitudes for Cu. The inelastic background of the data is consistently attributed to the nuclear incoherent part, which is clearly not isotropic as previously assumed in Cornell's analysis. The respective Primakoff cross sections from Be and Cu give Γ(η→γγ) = 0.476(62) keV, where the quoted error is statistical only. This result is consistent with the Particle Data Group average of 0.510(26) keV and in sharp contrast (by ≈50%) with the value of 0.324(46) keV obtained at Cornell.
Abstract:
We present measurements of J/ψ yields in d+Au collisions at √s_NN = 200 GeV recorded by the PHENIX experiment and compare them with yields in p+p collisions at the same energy per nucleon-nucleon collision. The measurements cover a large kinematic range in J/ψ rapidity (-2.2 < y < 2.4) with high statistical precision and are compared with two theoretical models: one with nuclear shadowing combined with final-state breakup and one with coherent gluon saturation effects. In order to remove model-dependent systematic uncertainties, we also compare the data to a simple geometric model. The forward-rapidity data are inconsistent with nuclear modifications that are linear or exponential in the density-weighted longitudinal thickness, such as those from the final-state breakup of the bound state.
Abstract:
The PHENIX experiment at the Relativistic Heavy Ion Collider has measured the invariant differential cross section for production of K⁰_S, ω, η′, and φ mesons in p+p collisions at √s = 200 GeV. Measurements of ω and φ production in different decay channels give consistent results. New results for the ω are in agreement with previously published data and extend the measured p_T coverage. The spectral shapes of all hadron transverse momentum distributions measured by PHENIX are well described by a Tsallis distribution functional form with only two parameters, n and T, determining the high-p_T and characterizing the low-p_T regions of the spectra, respectively. The values of these parameters are very similar for all analyzed meson spectra, but a lower parameter T is extracted for protons. The integrated invariant cross sections calculated from the fitted distributions are found to be consistent with existing measurements and with statistical model predictions.
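The two-parameter Tsallis form mentioned above is commonly written as (1 + p_T/(nT))^(−n): exponential-like at low p_T (controlled by T) and a power-law tail at high p_T (controlled by n). A minimal sketch of the shape, with illustrative parameter values rather than PHENIX fit results:

```python
def tsallis(pt, n=10.0, T=0.12):
    """Normalized Tsallis spectral shape (1 + pt/(n*T))**(-n).
    Parameter values are illustrative, not PHENIX fit results."""
    return (1.0 + pt / (n * T)) ** (-n)

# Low p_T: close to exp(-pt/T); high p_T: falls like a power law pt**(-n)
low, high = tsallis(0.1), tsallis(5.0)
print(low, high)
```

In the n → ∞ limit this form reduces to a pure exponential exp(−p_T/T), which is why a single function can describe both regions of the spectrum.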
Abstract:
New measurements by the PHENIX experiment at the Relativistic Heavy Ion Collider of η production at midrapidity as a function of transverse momentum (p_T) and collision centrality in √s_NN = 200 GeV Au+Au and p+p collisions are presented. They indicate nuclear modification factors (R_AA) that are similar in both magnitude and trend to those found in earlier π⁰ measurements. Linear fits to R_AA as a function of p_T in 5-20 GeV/c show that the slope is consistent with zero within two standard deviations at all centralities, although a slow rise cannot be excluded. Having different statistical and systematic uncertainties, the π⁰ and η measurements are complementary at high p_T; thus, along with the extended p_T range of these data, they can provide additional constraints for theoretical modeling and the extraction of transport properties.
Abstract:
We report the observation at the Relativistic Heavy Ion Collider of suppression of back-to-back correlations in the direct photon+jet channel in Au+Au relative to p+p collisions. Two-particle correlations of direct photon triggers with associated hadrons are obtained by statistical subtraction of the decay photon-hadron (γ-h) background. The initial momentum of the away-side parton is tightly constrained, because the parton-photon pair exactly balances in momentum at leading order in perturbative quantum chromodynamics, making such correlations a powerful probe of in-medium parton energy loss. The away-side nuclear suppression factor, I_AA, in central Au+Au collisions is 0.32 ± 0.12(stat) ± 0.09(syst) for hadrons of 3 < p_T^h < 5 GeV/c in coincidence with photons of 5 < p_T^γ < 15 GeV/c. The suppression is comparable to that observed for high-p_T single hadrons and dihadrons. The direct-photon associated yields in p+p collisions scale approximately with the momentum balance, z_T ≡ p_T^h/p_T^γ, as expected for a measurement of the away-side parton fragmentation function. We compare to Au+Au collisions, for which the momentum-balance dependence of the nuclear modification should be sensitive to the path-length dependence of parton energy loss.
Abstract:
The PHENIX experiment presents results from the RHIC 2006 run with polarized p+p collisions at √s = 62.4 GeV for inclusive π⁰ production at midrapidity. Unpolarized cross section results are measured for transverse momenta p_T = 0.5 to 7 GeV/c. Next-to-leading-order perturbative quantum chromodynamics calculations are compared with the data, and while the calculations are consistent with the measurements, next-to-leading-logarithmic corrections improve the agreement. Double helicity asymmetries A_LL are presented for p_T = 1 to 4 GeV/c and probe the higher range of Bjorken x of the gluon (x_g) with better statistical precision than our previous measurements at √s = 200 GeV. These measurements are sensitive to the gluon polarization in the proton for 0.06 < x_g < 0.4.
Abstract:
The PHENIX experiment has measured the suppression of semi-inclusive single high-transverse-momentum π⁰'s in Au+Au collisions at √s_NN = 200 GeV. The present understanding of this suppression is in terms of energy loss of the parent (fragmenting) parton in a dense color-charge medium. We have performed a quantitative comparison between various parton energy-loss models and our experimental data. The statistical and point-to-point uncorrelated, as well as correlated, systematic uncertainties are taken into account in the comparison. We detail this methodology and the resulting constraints on the model parameters, such as the initial color-charge density dN^g/dy, the medium transport coefficient q̂, or the initial energy-loss parameter ε₀. We find that high-transverse-momentum π⁰ suppression in Au+Au collisions has sufficient precision to constrain these model-dependent parameters at the ±20-25% (one standard deviation) level. These constraints include only the experimental uncertainties, and further studies are needed to compute the corresponding theoretical uncertainties.