67 results for Normalization constraint
Abstract:
In this Letter, we propose a new and model-independent cosmological test for the distance-duality (DD) relation, eta = D(L)(z)(1 + z)(-2)/D(A)(z) = 1, where D(L) and D(A) are, respectively, the luminosity and angular diameter distances. For D(L) we consider two sub-samples of Type Ia supernovae (SNe Ia) taken from Constitution data, whereas D(A) distances are provided by two samples of galaxy clusters compiled by De Filippis et al. and Bonamente et al. by combining the Sunyaev-Zeldovich effect and X-ray surface brightness. The SNe Ia redshifts of each sub-sample were carefully chosen to coincide with those of the associated galaxy cluster sample (Delta z < 0.005), thereby allowing a direct test of the DD relation. Since for very low redshifts D(A)(z) approximate to D(L)(z), we have tested the DD relation by assuming that eta is a function of the redshift, parameterized by two different expressions: eta(z) = 1 + eta(0)z and eta(z) = 1 + eta(0)z/(1 + z), where eta(0) is a constant parameter quantifying a possible departure from the strict validity of the reciprocity relation (eta(0) = 0). In the best scenario (linear parameterization), we obtain eta(0) = -0.28(-0.44)(+0.44) (2 sigma, statistical + systematic errors) for the De Filippis et al. sample (elliptical geometry), a result only marginally compatible with the DD relation. However, for the Bonamente et al. sample (spherical geometry) the constraint is eta(0) = -0.42(-0.34)(+0.34) (3 sigma, statistical + systematic errors), which is clearly incompatible with the distance-duality relation.
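As a minimal sketch of the test described above (not the authors' actual pipeline; the distance values below are invented for illustration), the DD diagnostic and the two parameterizations can be written as:

```python
# Illustrative sketch of the distance-duality (DD) test. The sample values
# are made up; the real analysis pairs SNe Ia luminosity distances with
# cluster angular diameter distances at nearly coincident redshifts.

def eta_observed(d_l, d_a, z):
    """eta(z) = D_L(z) * (1 + z)^-2 / D_A(z); equals 1 if DD holds."""
    return d_l / ((1.0 + z) ** 2 * d_a)

def eta_linear(z, eta0):
    """Linear parameterization: eta(z) = 1 + eta0 * z."""
    return 1.0 + eta0 * z

def eta_ratio(z, eta0):
    """Alternative parameterization: eta(z) = 1 + eta0 * z / (1 + z)."""
    return 1.0 + eta0 * z / (1.0 + z)

# If DD holds exactly (eta0 = 0), the observed eta is 1 at any redshift:
z, d_a = 0.3, 900.0                  # Mpc, illustrative
d_l = (1.0 + z) ** 2 * d_a           # DD-consistent luminosity distance
print(eta_observed(d_l, d_a, z))
```

Fitting eta(0) then amounts to comparing eta_observed for each SN-cluster pair against eta_linear or eta_ratio at the pair's redshift.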
Abstract:
We develop an automated spectral synthesis technique for the estimation of metallicities ([Fe/H]) and carbon abundances ([C/Fe]) for metal-poor stars, including carbon-enhanced metal-poor stars, for which other methods may prove insufficient. This technique, autoMOOG, is designed to operate on relatively strong features visible in even low- to medium-resolution spectra, yielding results comparable to much more telescope-intensive high-resolution studies. We validate this method by comparison with 913 stars which have existing high-resolution and low- to medium-resolution spectra, and that cover a wide range of stellar parameters. We find that at low metallicities ([Fe/H] less than or similar to -2.0), we successfully recover both the metallicity and carbon abundance, where possible, with an accuracy of similar to 0.20 dex. At higher metallicities, due to issues of continuum placement in the spectral normalization done prior to the running of autoMOOG, a general underestimate of the overall metallicity of a star is seen, although the carbon abundance is still successfully recovered. As a result, this method is only recommended for use on samples of stars of known, sufficiently low metallicity. For these low-metallicity stars, however, autoMOOG performs much more consistently and quickly than similar existing techniques, which should allow for analyses of large samples of metal-poor stars in the near future. Steps to improve and correct the continuum placement difficulties are being pursued.
Abstract:
The mechanism of incoherent pi(0) and eta photoproduction from complex nuclei is investigated from 4 to 12 GeV with an extended version of the multicollisional Monte Carlo (MCMC) intranuclear cascade model. The calculations take into account the elementary photoproduction amplitudes via a Regge model and the nuclear effects of photon shadowing, Pauli blocking, and meson-nucleus final-state interactions. The results for pi(0) photoproduction reproduced for the first time the magnitude and energy dependence of the measured ratios sigma(gamma A)/sigma(gamma N) for several nuclei (Be, C, Al, Cu, Ag, and Pb) from a Cornell experiment. The results for eta photoproduction fitted the inelastic background in Cornell's yields remarkably well; this background is clearly not isotropic, as previously assumed in Cornell's analysis. With this constraint for the background, the eta -> gamma gamma decay width was extracted using the Primakoff method, combining Be and Cu data [Gamma(eta -> gamma gamma) = 0.476(62) keV] and using Be data only [Gamma(eta -> gamma gamma) = 0.512(90) keV], where the errors are only statistical. These results are in sharp contrast (similar to 50-60%) with the value reported by the Cornell group [Gamma(eta -> gamma gamma) = 0.324(46) keV] and in line with the Particle Data Group average of 0.510(26) keV.
Abstract:
The double helicity asymmetry in neutral pion production for p(T) = 1 to 12 GeV/c was measured with the PHENIX experiment to access the gluon-spin contribution, Delta G, to the proton spin. Measured asymmetries are consistent with zero, and at a theory scale of mu(2) = 4 GeV(2) a next-to-leading-order QCD analysis gives Delta G([0.02,0.3]) = 0.2, with a constraint of -0.7 < Delta G([0.02,0.3]) < 0.5 at Delta chi(2) = 9 (similar to 3 sigma) for the sampled gluon momentum fraction (x) range, 0.02 to 0.3. The results are obtained using predictions for the measured asymmetries generated from four representative fits to polarized deep inelastic scattering data. We also consider the dependence of the Delta G constraint on the choice of the theoretical scale, a dominant uncertainty in these predictions.
Abstract:
The PHENIX experiment has measured the suppression of semi-inclusive single high-transverse-momentum pi(0)'s in Au+Au collisions at root s(NN) = 200 GeV. The present understanding of this suppression is in terms of energy loss of the parent (fragmenting) parton in a dense color-charge medium. We have performed a quantitative comparison between various parton energy-loss models and our experimental data. The statistical, point-to-point uncorrelated, as well as correlated systematic uncertainties are taken into account in the comparison. We detail this methodology and the resulting constraint on the model parameters, such as the initial color-charge density dN(g)/dy, the medium transport coefficient q-hat, or the initial energy-loss parameter epsilon(0). We find that high-transverse-momentum pi(0) suppression in Au+Au collisions has sufficient precision to constrain these model-dependent parameters at the +/- 20-25% (one standard deviation) level. These constraints include only the experimental uncertainties, and further studies are needed to compute the corresponding theoretical uncertainties.
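A common way to fold a point-to-point correlated systematic uncertainty into such a model-data comparison is to shift all data points coherently by a nuisance parameter epsilon, penalized by epsilon squared. The sketch below is a generic illustration with invented numbers, not the PHENIX collaboration's implementation:

```python
# Generic sketch of a chi-square with one fully correlated systematic
# uncertainty handled via a nuisance parameter epsilon (invented data).

def chi2(data, sigma_uncorr, sigma_corr, theory, epsilon):
    """Sum over points of ((d_i + eps*b_i - t_i)/sigma_i)^2 plus eps^2."""
    total = epsilon ** 2  # penalty for shifting by eps standard deviations
    for d, su, b, t in zip(data, sigma_uncorr, sigma_corr, theory):
        total += ((d + epsilon * b - t) / su) ** 2
    return total

def best_chi2(data, sigma_uncorr, sigma_corr, theory, n_grid=2001):
    """Minimize over epsilon by a simple grid scan in [-5, 5]."""
    grid = [-5.0 + 10.0 * i / (n_grid - 1) for i in range(n_grid)]
    return min(chi2(data, sigma_uncorr, sigma_corr, theory, e) for e in grid)

# Toy example: data sit one correlated sigma above theory, so the fit
# prefers epsilon near -1 and the minimized chi-square stays modest.
data   = [1.1, 2.2, 3.3]
s_unc  = [0.05, 0.05, 0.05]
s_corr = [0.1, 0.2, 0.3]    # fully correlated component per point
theory = [1.0, 2.0, 3.0]
print(best_chi2(data, s_unc, s_corr, theory))
```

Scanning a theory parameter and recording the minimized chi-square at each value then yields the one-standard-deviation constraints quoted in the abstract.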
Abstract:
We investigate bouncing solutions in the framework of the nonsingular gravity model of Brandenberger, Mukhanov and Sornborger. We show that a spatially flat universe filled with ordinary matter undergoing a phase of contraction reaches a stage of minimal expansion factor before bouncing in a regular way to reach the expanding phase. The expansion can be connected to the usual radiation- and matter-dominated epochs before reaching a final expanding de Sitter phase. In general relativity (GR), a bounce can only take place provided that the spatial sections are positively curved, a fact that has been shown to translate into a constraint on the characteristic duration of the bounce. In our model, on the other hand, a bounce can occur also in the absence of spatial curvature, which means that the time scale for the bounce can be made arbitrarily short or long. The implication is that constraints on the bounce characteristic time obtained in GR rely heavily on the assumed theory of gravity. Although the model we investigate is fourth order in the derivatives of the metric (and therefore unstable vis-a-vis the perturbations), this generic bounce dynamics should extend to string-motivated nonsingular models which can accommodate a spatially flat bounce.
Abstract:
Cosmological analyses based on currently available observations are unable to rule out a sizeable coupling between dark energy and dark matter. However, the signature of the coupling is not easy to grasp, since the coupling is degenerate with other cosmological parameters, such as the dark energy equation of state and the dark matter abundance. We discuss possible ways to break such degeneracy. Based on the perturbation formalism, we carry out the global fitting by using the latest observational data and get a tight constraint on the interaction between dark sectors. We find that the appropriate interaction can alleviate the coincidence problem.
Abstract:
The STAR Collaboration at the Relativistic Heavy Ion Collider presents a systematic study of high-transverse-momentum charged di-hadron correlations at small azimuthal pair separation Delta phi in d+Au and central Au+Au collisions at root s(NN) = 200 GeV. Significant correlated yield for pairs with large longitudinal separation Delta eta is observed in central Au+Au collisions, in contrast to d+Au collisions. The associated yield distribution in Delta eta x Delta phi can be decomposed into a narrow jet-like peak at small angular separation, which has a shape similar to that found in d+Au collisions, and a component that is narrow in Delta phi and depends only weakly on Delta eta, the "ridge." Using two systematically independent determinations of the background normalization and shape, a finite ridge yield is found to persist for trigger p(t) > 6 GeV/c, indicating that it is correlated with jet production. The transverse-momentum spectrum of hadrons comprising the ridge is found to be similar to that of bulk particle production in the measured range (2 < p(t) < 4 GeV/c).
Abstract:
We prove that for any alpha-mixing stationary process the hitting time of any n-string A(n) converges, when suitably normalized, to an exponential law. We identify the normalization constant lambda(A(n)). A similar statement holds also for the return time. To establish this result we prove two other results of independent interest. First, we show a relation between the rescaled hitting time and the rescaled return time, generalizing a theorem of Haydn, Lacroix and Vaienti. Second, we show that for positive-entropy systems, the probability of observing any n-string in n consecutive observations goes to zero as n goes to infinity.
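In standard notation, a convergence statement of this kind usually takes the following form (a hedged reconstruction of the typical shape of such results, not a quotation of the paper's theorem):

```latex
% tau_{A_n} denotes the hitting time of the n-string A_n, mu the
% stationary measure, and lambda(A_n) the normalization constant
% identified in the abstract (notation assumed for illustration).
\[
  \mathbb{P}\!\left( \tau_{A_n} > \frac{t}{\lambda(A_n)\,\mu(A_n)} \right)
  \;\longrightarrow\; e^{-t}, \qquad n \to \infty .
\]
```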
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data, and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., those ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
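One standard correction of this kind (a generic errors-in-variables sketch, not the article's exact estimator) subtracts the known noise variance from the regressor's sample variance; the data below are simulated and the noise variance sigma_u^2 is assumed known, whereas the article estimates it from microarray replicates:

```python
import random

# Demonstrates attenuation bias of the naive OLS slope under measurement
# error in the regressor, and a corrected estimate that removes the known
# noise variance sigma_u^2 from the regressor's sample variance.

random.seed(0)
beta_true, sigma_u2, n = 2.0, 0.5, 20000
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
x_obs  = [x + random.gauss(0.0, sigma_u2 ** 0.5) for x in x_true]  # noisy
y      = [beta_true * x + random.gauss(0.0, 0.1) for x in x_true]

mx = sum(x_obs) / n
my = sum(y) / n
s_xy = sum((a - mx) * (b - my) for a, b in zip(x_obs, y)) / n
s_xx = sum((a - mx) ** 2 for a in x_obs) / n

beta_naive     = s_xy / s_xx               # biased toward zero
beta_corrected = s_xy / (s_xx - sigma_u2)  # noise variance removed

print(beta_naive, beta_corrected)
```

With var(x_true) = 1 and sigma_u^2 = 0.5, the naive slope converges to beta_true/(1 + sigma_u^2), about 1.33, while the corrected slope recovers a value near 2.0.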
Abstract:
Recent fears of terrorism have provoked an increase in delays and denials of transboundary shipments of radioisotopes. This represents a serious constraint to sterile insect technique (SIT) programs around the world, as they rely on the use of ionizing radiation from radioisotopes for insect sterilization. To validate a novel X ray irradiator, a series of studies on Ceratitis capitata (Wiedemann) and Anastrepha fraterculus (Wiedemann) (Diptera: Tephritidae) were carried out, comparing the relative biological effectiveness (RBE) between X rays and traditional gamma radiation from (60)Co. Male C. capitata pupae and pupae of both sexes of A. fraterculus, both 24-48 h before adult emergence, were irradiated with doses ranging from 15 to 120 Gy and 10-70 Gy, respectively. Estimated mean doses of 91.2 Gy of X and 124.9 Gy of gamma radiation induced 99% sterility in C. capitata males. Irradiated A. fraterculus were 99% sterile at approximate to 40-60 Gy for both radiation treatments. Standard quality control parameters and mating indices were not significantly affected by the two types of radiation. The RBE did not differ significantly between the tested X and gamma radiation, and X rays are as biologically effective for SIT purposes as gamma rays are. This work confirms the suitability of this new generation of X ray irradiators for pest control programs that integrate the SIT.
Abstract:
We investigated the effect of joint immobilization on postural sway during quiet standing. We hypothesized that the center of pressure (COP), rambling, and trembling trajectories would be affected by joint immobilization. Ten young adults stood on a force plate for 60 s without and with immobilized joints (only knees constrained, CK; knees and hips, CH; and knees, hips, and trunk, CT), with their eyes open (OE) or closed (CE). The root mean square deviation (RMS, the standard deviation from the mean) and the mean speed of the COP, rambling, and trembling trajectories in the anterior-posterior and medial-lateral directions were analyzed. Similar effects of vision were observed for both directions: larger amplitudes for all variables were observed in the CE condition. In the anterior-posterior direction, postural sway increased only when the knees, hips, and trunk were immobilized. In the medial-lateral direction, the RMS and the mean speed of the COP, rambling, and trembling displacements decreased after immobilization of the knees and hips and of the knees, hips, and trunk. These findings indicate that the single inverted pendulum model is unable to completely explain the processes involved in the control of quiet upright stance in the anterior-posterior and medial-lateral directions.
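The two sway measures named above can be sketched directly from their definitions; the trajectory below is synthetic (real data come from force-plate recordings), and the sampling rate is an assumed illustrative value:

```python
# Sketch of the two sway measures: RMS (standard deviation from the mean)
# and mean speed of a COP time series in one direction.

def rms(signal):
    """Standard deviation of the trajectory from its mean."""
    m = sum(signal) / len(signal)
    return (sum((s - m) ** 2 for s in signal) / len(signal)) ** 0.5

def mean_speed(signal, fs):
    """Mean absolute point-to-point velocity; fs is the sampling rate (Hz)."""
    path = sum(abs(b - a) for a, b in zip(signal, signal[1:]))
    return path * fs / (len(signal) - 1)

cop_ap = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]  # cm, synthetic
print(rms(cop_ap), mean_speed(cop_ap, fs=100.0))
```

The same two functions would be applied separately to the COP, rambling, and trembling trajectories in each direction.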
Abstract:
Myocardial infarction (MI) has been associated with increases in reactive oxygen species (ROS). Exercise training (ET) has been shown to exert positive modulations on vascular function, and the purpose of the present study was to investigate the effect of moderate ET on the aortic superoxide production index, NAD(P)H oxidase activity, superoxide dismutase activity, and vasomotor response in MI rats. Aerobic ET was performed for 11 weeks. Myocardial infarction significantly diminished maximal exercise capacity and increased the vasoconstrictory response to norepinephrine, which was related to the increased activity of NAD(P)H oxidase and basal superoxide production. On the other hand, ET normalized the superoxide production, mostly due to decreased NAD(P)H oxidase activity, although a minor SOD effect may also be present. These adaptations were paralleled by normalization of the vasoconstrictory response to norepinephrine. Thus, diminished ROS production seems to be an important mechanism by which ET mediates its beneficial vascular effects in the MI condition.
Abstract:
Self-controlled practice implies a process of decision making, which suggests that the options available in a self-controlled practice condition could affect learners. The number of task components with no fixed position in a movement sequence may affect the way learners self-control their practice. A 200 cm coincident timing track with 90 light-emitting diodes (LEDs), the first and the last LEDs being the warning and target lights, respectively, was set so that the apparent speed of the light along the track was 1.33 m/sec. Participants were required to touch six sensors sequentially, the last one coincidently with the lighting of the target light (timing task). Group 1 (n = 55) had only one constraint and were instructed to touch the sensors in any order, except for the last sensor, which had to be the one positioned close to the target light. Group 2 (n = 53) had three constraints: the first two and the last sensor to be touched. Both groups practiced the task until timing error was less than 30 msec on three consecutive trials. There were no statistically significant differences between groups in the number of trials needed to reach the performance criterion, but (a) participants in Group 2 created fewer sequences compared to Group 1, and (b) they were more likely to use the same sequence throughout the learning process. The number of options for a movement sequence affected the way learners self-controlled their practice but had no effect on the amount of practice needed to reach criterion performance.
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems for large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.