182 results for Renormalization (Physics)


Relevance: 10.00%

Abstract:

We report a new STAR measurement of the longitudinal double-spin asymmetry A_LL for inclusive jet production at midrapidity in polarized p+p collisions at a center-of-mass energy of √s = 200 GeV. The data, which cover jet transverse momenta 5 < p_T < 30 GeV/c, are substantially more precise than previous measurements. They provide significant new constraints on the gluon spin contribution to the nucleon spin through comparison with predictions derived from one global fit to polarized deep-inelastic scattering measurements.
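
For reference, the longitudinal double-spin asymmetry is conventionally built from helicity-sorted cross sections; a standard textbook form (not quoted in the abstract itself) is

\[
A_{LL} = \frac{\sigma^{++} - \sigma^{+-}}{\sigma^{++} + \sigma^{+-}},
\]

where \sigma^{++} (\sigma^{+-}) is the inclusive jet cross section for equal (opposite) beam helicity configurations.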

Relevance: 10.00%

Abstract:

We present the first spin alignment measurements for the K*0(892) and φ(1020) vector mesons produced at midrapidity with transverse momenta up to 5 GeV/c at √s_NN = 200 GeV at RHIC. The diagonal spin-density matrix elements with respect to the reaction plane in Au+Au collisions are ρ_00 = 0.32 ± 0.04 (stat) ± 0.09 (syst) for the K*0 (0.8 < p_T < 5.0 GeV/c) and ρ_00 = 0.34 ± 0.02 (stat) ± 0.03 (syst) for the φ (0.4 < p_T < 5.0 GeV/c), and are constant with transverse momentum and collision centrality. The data are consistent with the unpolarized expectation of 1/3, and thus no evidence is found for the transfer of the orbital angular momentum of the colliding system to the vector-meson spins. Spin alignments for the K*0 and φ in Au+Au collisions were also measured with respect to the particle's production plane. The φ result, ρ_00 = 0.41 ± 0.02 (stat) ± 0.04 (syst), is consistent with that in p+p collisions, ρ_00 = 0.39 ± 0.03 (stat) ± 0.06 (syst), also measured in this work. The measurements thus constrain the possible size of polarization phenomena in the production dynamics of vector mesons.
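
The diagonal element ρ_00 quoted above is extracted from the angular distribution of the decay products with respect to the chosen quantization axis; the standard parameterization (given here as background, not taken from the abstract) is

\[
\frac{dN}{d(\cos\theta^{*})} \propto (1-\rho_{00}) + (3\rho_{00}-1)\cos^{2}\theta^{*},
\]

so that ρ_00 = 1/3 corresponds to the unpolarized (no spin alignment) case.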

Relevance: 10.00%

Abstract:

We present STAR results on the elliptic flow v_2 of charged hadrons and of strange and multistrange particles from √s_NN = 200 GeV Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC). A detailed study of the centrality dependence of v_2 over a broad transverse momentum range is presented. Comparisons of different analysis methods are made in order to estimate systematic uncertainties. To address the nonflow effect, we have performed the first analysis of v_2 with the Lee-Yang zero method for K_S^0 and Λ. In the relatively low p_T region, p_T ≤ 2 GeV/c, a scaling with m_T - m is observed for identified hadrons in each centrality bin studied. However, we do not observe v_2(p_T) scaled by the participant eccentricity to be independent of centrality. At higher p_T, 2 ≤ p_T ≤ 6 GeV/c, v_2 scales with quark number for all hadrons studied. For the multistrange hadron Ω, which does not suffer appreciable hadronic interactions, the values of v_2 are consistent with both m_T - m scaling at low p_T and number-of-quark scaling at intermediate p_T. As a function of collision centrality, an increase of p_T-integrated v_2 scaled by the participant eccentricity has been observed, indicating stronger collective flow in more central Au+Au collisions.
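
For orientation, v_2 is the second Fourier coefficient of the azimuthal particle distribution relative to the reaction plane, and the number-of-quark scaling mentioned above is usually tested per constituent quark; schematically (standard definitions, not reproduced from the paper),

\[
\frac{dN}{d\phi} \propto 1 + 2 v_2 \cos\!\left[2(\phi - \Psi_{RP})\right],
\qquad
\frac{v_2}{n_q} \ \text{versus}\ \frac{m_T - m}{n_q},
\]

with n_q = 2 for mesons and n_q = 3 for baryons.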

Relevance: 10.00%

Abstract:

We report on the observed differences in production rates of strange and multistrange baryons in Au+Au collisions at √s_NN = 200 GeV compared to p+p interactions at the same energy. The strange baryon yields in Au+Au collisions, when scaled down by the number of participating nucleons, are enhanced relative to those measured in p+p reactions. The observed enhancement increases with the strangeness content of the baryon, and it increases for all strange baryons with collision centrality. The enhancement is qualitatively similar to that observed at the lower collision energy √s_NN = 17.3 GeV. These observations refer to bulk production; at intermediate p_T, 1 < p_T < 4 GeV/c, the strange baryon yields even exceed binary scaling of the p+p yields.
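
The comparison described above is usually summarized by a strangeness enhancement factor defined per participant pair (a conventional definition, given here as an assumption rather than quoted from the paper):

\[
E = \frac{Y_{AA}/\langle N_{\mathrm{part}}\rangle}{Y_{pp}/2},
\]

where Y is the per-event yield of a given strange baryon and ⟨N_part⟩ is the mean number of participating nucleons; E > 1 signals enhancement.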

Relevance: 10.00%

Abstract:

Photoproduction reactions occur when the electromagnetic field of a relativistic heavy ion interacts with another heavy ion. The STAR Collaboration presents a measurement of ρ0 and direct π+π- photoproduction in ultraperipheral relativistic heavy-ion collisions at √s_NN = 200 GeV. We observe both exclusive photoproduction and photoproduction accompanied by mutual Coulomb excitation. We find a coherent cross section of σ(AuAu → Au*Au*ρ0) = 530 ± 19 (stat.) ± 57 (syst.) mb, in accord with theoretical calculations based on a Glauber approach, but considerably below the predictions of a color dipole model. The ρ0 transverse momentum spectrum (p_T²) is fit by a double exponential curve including both coherent and incoherent coupling to the target nucleus; we find σ_inc/σ_coh = 0.29 ± 0.03 (stat.) ± 0.08 (syst.). The ratio of direct π+π- to ρ0 production is comparable to that observed in γp collisions at HERA and appears to be independent of photon energy. Finally, the measured ρ0 spin helicity matrix elements agree within errors with the expected s-channel helicity conservation.
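
As an illustration of the double-exponential fit to the p_T² spectrum mentioned above, the sketch below separates coherent and incoherent components with SciPy; the model function, slope values, and synthetic data are illustrative assumptions, not the collaboration's actual fit code or results.

import numpy as np
from scipy.optimize import curve_fit

# Double-exponential model for the rho0 p_T^2 spectrum:
# a coherent (steep) and an incoherent (shallow) component.
def two_exp(pt2, a_coh, b_coh, a_inc, b_inc):
    return a_coh * np.exp(-b_coh * pt2) + a_inc * np.exp(-b_inc * pt2)

# Synthetic data for illustration only (x axis in GeV^2/c^2).
rng = np.random.default_rng(0)
pt2 = np.linspace(0.0, 0.3, 60)
truth = two_exp(pt2, 1000.0, 300.0, 50.0, 10.0)
counts = rng.poisson(truth).astype(float)

# Fit; p0 gives rough starting values so the two slopes separate cleanly.
popt, pcov = curve_fit(two_exp, pt2, counts,
                       p0=(800.0, 200.0, 30.0, 5.0), maxfev=10000)
a_coh, b_coh, a_inc, b_inc = popt

# Ratio of incoherent to coherent yields, integrating each exponential over p_T^2.
ratio = (a_inc / b_inc) / (a_coh / b_coh)
print(f"sigma_inc / sigma_coh (illustrative) = {ratio:.3f}")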

Relevance: 10.00%

Abstract:

The efficacy of photodynamic therapy (PDT) depends on a variety of parameters: the concentration of the photosensitizer at the time of treatment, light wavelength, fluence, fluence rate, availability of oxygen within the illuminated volume, and light distribution in the tissue. Dosimetry in PDT requires bringing together adequate amounts of light, drug, and tissue oxygen, and adequate dosimetry should be able to predict the extent of the tissue damage. The photosensitizer photobleaching rate depends on the availability of molecular oxygen in the tissue. According to photosensitizer photobleaching models, strong photobleaching should be associated with a high production of singlet oxygen and therefore with stronger photodynamic action, resulting in a greater depth of necrosis. The purpose of this work is to show a possible correlation between the depth of necrosis and the in vivo photodegradation of the photosensitizer (in this case, Photogem®) during PDT. Such a correlation opens the possibility of developing a real-time evaluation of the photodynamic action during PDT application. Experiments were performed over a range of fluences (0-450 J/cm²) at a constant fluence rate of 250 mW/cm², with different illumination times (0-1800 s) used to achieve the desired fluence. A quantity ψ was defined as the product of the fluorescence ratio (related to photosensitizer degradation at the surface) and the observed depth of necrosis. The correlation between depth of necrosis and the surface fluorescence signal is expressed through ψ and could allow, in principle, noninvasive monitoring of PDT effects during treatment. A high degree of correlation is observed, and a simple mathematical model to justify the results is presented.
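
In compact form, the quantity described above can be written as (the symbols are illustrative shorthand, not necessarily the paper's notation)

\[
\psi = R_F \times d_{nec},
\]

where R_F is the surface fluorescence ratio quantifying photosensitizer degradation and d_nec is the observed depth of necrosis.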

Relevance: 10.00%

Abstract:

Ion channels are pores formed by proteins and responsible for carrying ion fluxes through cellular membranes. Ion channels can assume different conformational states, thereby controlling ion flow. Physically, the conformational transitions from one state to another are associated with energy barriers between them and depend on stimuli such as the electric field, ligands, and second messengers. Several models have been proposed to describe the kinetics of ion channels. The classical Markovian model assumes that a future transition is independent of the time that the ion channel stayed in a previous state. Other models, such as the fractal and chaotic models, assume that the rate of transitions between states depends on the time that the ion channel stayed in a previous state. For the calcium-activated potassium channels of Leydig cells, R/S Hurst analysis has indicated that the channels are long-term correlated with a Hurst coefficient H of around 0.7, showing persistent memory in their kinetics. Here, we apply R/S analysis to the opening and closing dwell-time series obtained from data simulated with the chaotic model proposed by L. Liebovitch and T. Toth [J. Theor. Biol. 148, 243 (1991)], and we show that this chaotic model, or any model that treats the set of channel openings and closings as independent events, is inadequate to describe the long-term correlation (memory) already described for the experimental data. (C) 2008 American Institute of Physics.
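
A minimal sketch of a rescaled-range (R/S) estimate of the Hurst coefficient for a dwell-time series is shown below; it is a generic textbook implementation under simple assumptions (dyadic window sizes, mean R/S per window), not the authors' analysis code.

import numpy as np

def rs_hurst(series, min_chunk=8):
    """Estimate the Hurst coefficient H of a 1-D series by rescaled-range (R/S) analysis."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()               # range of the cumulative deviation
            s = chunk.std()                         # standard deviation of the chunk
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_vals.append(np.mean(rs_per_chunk))
        size *= 2
    # H is the slope of log(R/S) versus log(window size).
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h

# Illustration with an uncorrelated (memoryless) dwell-time series: H should come out near 0.5.
rng = np.random.default_rng(1)
dwell_times = rng.exponential(scale=1.0, size=4096)
print(f"Estimated H = {rs_hurst(dwell_times):.2f}")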

Relevance: 10.00%

Abstract:

In this paper, we study the behavior of immune memory against antigenic mutation. Using a dynamic model proposed by one of the authors in a previous study (A. de Castro [Phys. J. Appl. Phys. 33, 147 (2006) and Simul. Model. Pract. Theory 15, 831 (2007)]), we performed simulations of several inoculations in which, for each virtual sample, the viral population undergoes mutations. Our results suggest that the sustainability of the immunizations depends on viral variability and that the memory lifetimes are not random, which contradicts what was suggested by Tarlinton et al. [Curr. Opin. Immunol. 20, 162 (2008)]. We show that what may cause an apparently random behavior of the immune memory is the antigenic variability.

Relevance: 10.00%

Abstract:

In the present study, a finite element model of a half-sectioned molar tooth was developed in order to understand the thermal behavior of dental hard tissues (both enamel and dentin) under laser irradiation. The model was validated by comparison with an in vitro experiment in which a sound molar tooth was irradiated by an Er,Cr:YSGG pulsed laser. The numerical tooth model was conceived to simulate the in vitro experiment, reproducing the dimensions and physical conditions of a typical sound molar tooth, considering laser energy absorption and calculating the heat transfer through the dental tissues in three dimensions. The numerical assay considered the same three laser energy densities at the same wavelength (2.79 μm) used in the experiment. A thermographic camera was used to perform the in vitro experiment, in which an Er,Cr:YSGG laser (2.79 μm) was used to irradiate tooth samples, and the infrared images obtained were stored and analyzed. The temperature increments in the finite element model and in the in vitro experiment were compared. The distribution of temperature inside the tooth versus time, plotted for two critical points, showed relatively good agreement between the results of the experiment and the model. The three-dimensional model allows one to understand how heat propagates through the dentin and enamel and to relate the amount of energy applied, the width of the laser pulses, and the temperature inside the tooth. (C) 2008 American Institute of Physics. [DOI: 10.1063/1.2953526]
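
The governing relation in such a thermal finite element model is the heat conduction equation with a laser source term; in its generic form (the standard equation, not the paper's specific parameterization),

\[
\rho c \,\frac{\partial T}{\partial t} = \nabla\cdot\left(k\,\nabla T\right) + Q_{laser}(\mathbf{r},t),
\]

where ρ, c, and k are the density, specific heat, and thermal conductivity of enamel or dentin, and Q_laser is the absorbed laser power density.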

Relevance: 10.00%

Abstract:

The existence of a classical limit describing the interacting particles in a second-quantized theory of identical particles with bosonic symmetry is proved. This limit exists in addition to the previously established classical limit with a classical field behavior, showing that the limit h → 0 of the theory is not unique. An analogous result is valid for a free massive scalar field: two distinct classical limits are proved to exist, describing a system of particles or a classical field. The introduction of local operators in order to represent kinematical properties of interest is shown to break the permutation symmetry under some localizability conditions, allowing the study of individual particle properties.

Relevance: 10.00%

Abstract:

We introduce the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS). CATT-BRAMS is an on-line transport model fully consistent with the simulated atmospheric dynamics. Emission sources of trace gases from biomass burning and urban-industrial-vehicular activities, and of aerosol particles from biomass burning, are obtained from several published datasets and from remote sensing information. The tracer and aerosol mass concentration prognostics include the effects of sub-grid-scale turbulence in the planetary boundary layer, convective transport by shallow and deep moist convection, wet and dry deposition, and plume rise associated with vegetation fires, in addition to the grid-scale transport. The radiation parameterization takes into account the interaction between the simulated biomass burning aerosol particles and shortwave and longwave radiation. The atmospheric model BRAMS is based on the Regional Atmospheric Modeling System (RAMS), with several improvements associated with cumulus convection representation, soil moisture initialization, and a surface scheme tuned for the tropics, among others. In this paper the CATT-BRAMS model is used to simulate carbon monoxide and particulate material (PM2.5) surface fluxes and atmospheric transport during the 2002 LBA field campaigns, conducted during the transition from the dry to the wet season in the southwest Amazon Basin. Model evaluation is addressed through comparisons between model results and near-surface, radiosonde, and airborne measurements performed during the field campaign, as well as remote sensing derived products. We show the matching of emission strengths to the carbon monoxide observed in the LBA campaign. A relatively good comparison with the MOPITT data is also obtained, in spite of the several difficulties implied by the MOPITT a priori assumptions.
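
Schematically, the tracer mass-mixing-ratio tendency solved by an on-line transport model of this kind collects the processes listed above (a generic form given for orientation, not the exact CATT-BRAMS formulation):

\[
\frac{\partial \bar{s}}{\partial t}
= \left(\frac{\partial \bar{s}}{\partial t}\right)_{\!adv}
+ \left(\frac{\partial \bar{s}}{\partial t}\right)_{\!PBL\,turb}
+ \left(\frac{\partial \bar{s}}{\partial t}\right)_{\!conv}
+ W_{dep} + Q_{source},
\]

with terms for grid-scale advection, sub-grid planetary-boundary-layer turbulence, shallow and deep convective transport, wet and dry deposition, and the emission source (including plume rise).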

Relevance: 10.00%

Abstract:

This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and -0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. Regarding the other algorithms, GSCAT presented a low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to those of the PR and S-Pol but with a bimodal shape. Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated during the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
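
The validation statistics quoted above (mean error and normalized bias against a reference such as the PR or S-Pol data) can be computed as in the short sketch below; the formulas are the usual definitions and are an assumption here, since the abstract does not spell out the paper's exact conventions.

import numpy as np

def validation_stats(estimated, reference):
    """Mean error and normalized bias of estimated rain rates against a reference (mm/h)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    mean_error = np.mean(est - ref)                        # mm/h
    normalized_bias = (est.sum() - ref.sum()) / ref.sum()  # dimensionless
    return mean_error, normalized_bias

# Illustrative values only.
est = np.array([0.0, 1.2, 3.5, 10.0, 0.4])
ref = np.array([0.1, 1.0, 4.0, 8.5, 0.5])
me, nb = validation_stats(est, ref)
print(f"mean error = {me:.3f} mm/h, normalized bias = {nb:.1%}")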

Relevance: 10.00%

Abstract:

Creation of cold dark matter (CCDM) can be described macroscopically by a negative pressure, and, therefore, the mechanism is capable of accelerating the Universe without the need for an additional dark energy component. In this framework, we discuss the evolution of perturbations by considering a neo-Newtonian approach in which, unlike in standard Newtonian cosmology, the fluid pressure is taken into account even in the homogeneous and isotropic background equations (Lima, Zanchin, and Brandenberger, MNRAS 291, L1, 1997). The evolution of the density contrast is calculated in the linear approximation and compared to the one predicted by the ΛCDM model. The difference between the CCDM and ΛCDM predictions at the perturbative level is quantified by using three different statistical methods, namely: a simple χ² analysis in the relevant parameter space, a Bayesian statistical inference, and, finally, a Kolmogorov-Smirnov test. We find that, under certain circumstances, the CCDM scenario analyzed here predicts an overall dynamics (including the Hubble flow and the matter fluctuation field) which fully recovers that of the traditional cosmic concordance model. Our basic conclusion is that such a reduction of the dark sector provides a viable alternative description to the accelerating ΛCDM cosmology.
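
As an illustration of the Kolmogorov-Smirnov comparison mentioned above, the sketch below contrasts two samples of a derived statistic with scipy.stats.ks_2samp; the samples are synthetic stand-ins, not the actual CCDM or ΛCDM predictions.

import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for a derived statistic (e.g. density-contrast amplitudes)
# under two competing models; in the paper these would come from the CCDM and
# LambdaCDM perturbation calculations.
rng = np.random.default_rng(42)
sample_model_a = rng.normal(loc=0.80, scale=0.05, size=500)
sample_model_b = rng.normal(loc=0.80, scale=0.05, size=500)

# Two-sample Kolmogorov-Smirnov test: a large p-value means the two
# distributions are statistically indistinguishable.
statistic, p_value = ks_2samp(sample_model_a, sample_model_b)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")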

Relevance: 10.00%

Abstract:

It is possible that a system composed of up, down, and strange quarks exists as the true ground state of nuclear matter at high densities and low temperatures. This exotic plasma, called strange quark matter (SQM), seems to be even more favorable energetically if the quarks are in a superconducting state, the so-called color-flavor locked state. Here we present calculations made on the basis of the MIT bag model, considering the influence of finite temperature on the allowed parameters characterizing the system for the stability of bulk SQM (the so-called stability windows) and also for strangelets, small lumps of SQM, both in the color-flavor locking scenario. We compare these results with those for unpaired SQM and also briefly discuss some of their astrophysical implications. The issue of the strangelet electric charge is also discussed. The effects of dynamical screening, though important for unpaired SQM strangelets, are not relevant when considering pairing among all three flavors and colors of quarks.
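
For background, in the simplest MIT bag description of massless, noninteracting quarks the pressure and energy density are tied to the bag constant B by the standard relation (quoted here only as orientation; the finite-temperature and color-flavor-locked pairing terms used in the paper modify it):

\[
p = \tfrac{1}{3}\left(\varepsilon - 4B\right),
\]

so that bulk quark matter is self-bound where p = 0, i.e. at ε = 4B.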

Relevance: 10.00%

Abstract:

The influence of a possible nonzero chemical potential μ on the nature of dark energy is investigated by assuming that the dark energy is a relativistic perfect simple fluid obeying the equation of state p = ωρ (ω < 0, constant). The entropy condition, S ≥ 0, implies that the possible values of ω are heavily dependent on the magnitude, as well as on the sign, of the chemical potential. For μ > 0, the ω parameter must be greater than -1 (vacuum is forbidden), while for μ < 0 not only the vacuum but even a phantomlike behavior (ω < -1) is allowed. In any case, the ratio between the chemical potential and the temperature remains constant, that is, μ/T = μ_0/T_0. Assuming that the dark energy constituents have either a bosonic or fermionic nature, the general form of the spectrum is also proposed. For bosons μ is always negative, and the extended Wien's law allows only a dark component with ω < -1/2, which includes the vacuum and the phantomlike cases. The same happens in the fermionic branch for μ < 0. However, fermionic particles with μ > 0 are permitted only if -1
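
In compact form, the fluid description referred to above combines the quoted equation of state with the constancy of the chemical potential-to-temperature ratio along the expansion:

\[
p = \omega \rho \quad (\omega < 0,\ \mathrm{const.}),
\qquad
\frac{\mu}{T} = \frac{\mu_0}{T_0}.
\]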