58 results for Temporal constraints analysis
Abstract:
We present a re-analysis of the Geneva-Copenhagen survey, which benefits from the infrared flux method to improve the accuracy of the derived stellar effective temperatures and uses the latter to build a consistent and improved metallicity scale. Metallicities are calibrated on high-resolution spectroscopy and checked against four open clusters and a moving group, showing excellent consistency. The new temperature and metallicity scales provide a better match to theoretical isochrones, which are used for a Bayesian analysis of stellar ages. With respect to previous analyses, our stars are on average 100 K hotter and 0.1 dex more metal rich, which shifts the peak of the metallicity distribution function to around the solar value. From Strömgren photometry we are able to derive, for the first time, a proxy for [alpha/Fe] abundances, which enables us to perform a tentative dissection of the chemical thin and thick disc. We find evidence that the latter is composed of an old, mildly but systematically alpha-enhanced population that extends to super-solar metallicities, in agreement with spectroscopic studies. Our revision offers the largest existing kinematically unbiased sample of the solar neighbourhood containing full information on kinematics, metallicities, and ages; it thus provides better constraints on the physical processes relevant to the build-up of the Milky Way disc, enabling a better understanding of the Sun in a Galactic context.
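As a rough illustration of the Bayesian age step described above, the sketch below evaluates a Gaussian likelihood for a single star on an age grid against a toy isochrone track. The isochrone values, observables, and uncertainties are all hypothetical placeholders, not the survey's actual pipeline.

```python
import numpy as np

# Toy isochrone track at the star's metallicity: predicted Teff and
# absolute magnitude as a function of trial age (illustrative numbers).
ages = np.linspace(0.5, 13.5, 400)               # Gyr
teff_iso = 5900.0 - 40.0 * ages                  # hypothetical trend
mv_iso = 4.2 + 0.05 * ages

# Hypothetical observed star with Gaussian uncertainties
teff_obs, teff_err = 5750.0, 80.0
mv_obs, mv_err = 4.6, 0.10

# Gaussian likelihood on the age grid with a flat age prior
chi2 = ((teff_iso - teff_obs) / teff_err) ** 2 \
     + ((mv_iso - mv_obs) / mv_err) ** 2
posterior = np.exp(-0.5 * (chi2 - chi2.min()))
da = ages[1] - ages[0]
posterior /= posterior.sum() * da                # normalize on the grid

print("posterior mode: %.1f Gyr" % ages[np.argmax(posterior)])
print("posterior mean: %.1f Gyr" % np.sum(ages * posterior * da))
```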
Abstract:
We discuss the dynamics of the Universe within the framework of the massive graviton cold dark matter scenario (MGCDM), in which gravitons are treated geometrically as massive particles. In this modified gravity theory, the main effect of the gravitons is to alter the density evolution of the cold dark matter component in such a way that the Universe evolves to an accelerating expanding regime, as presently observed. Tight constraints on the main cosmological parameters of the MGCDM model are derived by performing a joint likelihood analysis involving recent type Ia supernova data, the cosmic microwave background shift parameter, and the baryonic acoustic oscillations traced by the Sloan Digital Sky Survey red luminous galaxies. The linear evolution of small density fluctuations is also analyzed in detail. It is found that the growth factor of the MGCDM model differs only slightly ($\sim$1-4%) from that of the conventional flat $\Lambda$CDM cosmology. The growth rates of clustering predicted by the MGCDM and $\Lambda$CDM models are confronted with observations, and the corresponding best-fit values of the growth index ($\gamma$) are determined. Using the expectations of realistic future X-ray and Sunyaev-Zeldovich cluster surveys, we derive the dark matter halo mass function and the corresponding redshift distribution of cluster-size halos for the MGCDM model. Finally, we show that the Hubble flow differences between the MGCDM and $\Lambda$CDM models produce a halo redshift distribution that departs significantly from those predicted by other dark energy models. These results suggest that the MGCDM model can be observationally distinguished from $\Lambda$CDM and from a large number of dark energy models recently proposed in the literature.
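A schematic version of such a joint likelihood analysis is sketched below: a total chi-square combines the SNe Ia distance moduli, the CMB shift parameter, and the BAO acoustic parameter, here for a flat one-parameter toy model standing in for MGCDM. All data values are invented placeholders, not the compilations used in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

C = 299792.458  # speed of light [km/s]

def E(z, om):
    """Dimensionless expansion rate of a flat, one-parameter toy model."""
    return np.sqrt(om * (1 + z) ** 3 + (1 - om))

def dc(z, om):
    """Comoving distance in units of c/H0."""
    return quad(lambda x: 1.0 / E(x, om), 0.0, z)[0]

def mu(z, om, h0=70.0):
    """Distance modulus from the luminosity distance."""
    return 5.0 * np.log10((1 + z) * (C / h0) * dc(z, om)) + 25.0

# Placeholder data sets (illustrative values only)
rng = np.random.default_rng(1)
z_sn = np.linspace(0.05, 1.5, 30)
mu_obs = np.array([mu(z, 0.27) for z in z_sn]) + rng.normal(0, 0.2, z_sn.size)
mu_err = np.full(z_sn.size, 0.2)
R_obs, R_err = 1.71, 0.02   # CMB shift parameter, R = sqrt(om) * dc(z_ls)
A_obs, A_err = 0.47, 0.02   # BAO acoustic parameter at z = 0.35

def chi2(om):
    sn = np.sum(((mu_obs - [mu(z, om) for z in z_sn]) / mu_err) ** 2)
    cmb = ((np.sqrt(om) * dc(1089.0, om) - R_obs) / R_err) ** 2
    A = np.sqrt(om) * E(0.35, om) ** (-1 / 3) * (dc(0.35, om) / 0.35) ** (2 / 3)
    bao = ((A - A_obs) / A_err) ** 2
    return sn + cmb + bao

best = minimize_scalar(chi2, bounds=(0.05, 0.6), method="bounded")
print("best-fit matter density: %.3f" % best.x)
```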
Abstract:
We discuss the properties of homogeneous and isotropic flat cosmologies in which the present accelerating stage is powered only by the gravitationally induced creation of cold dark matter (CCDM) particles ($\Omega_m = 1$). For some matter creation rates proposed in the literature, we show that the main cosmological functions, such as the scale factor of the universe, the Hubble expansion rate, the growth factor, and the cluster formation rate, are analytically defined. The best CCDM scenario has only one free parameter, and our joint analysis involving baryonic acoustic oscillations + cosmic microwave background (CMB) + SNe Ia data yields $\tilde{\Omega}_m = 0.28 \pm 0.01$ ($1\sigma$), where $\tilde{\Omega}_m$ is the observed matter density parameter. In particular, this implies that the model has no dark energy, but the part of the matter that is effectively clustering is in good agreement with the latest determinations from the large-scale structure. The growth of perturbations and the formation of galaxy clusters in such scenarios are also investigated. Despite the fact that both scenarios may share the same Hubble expansion, we find that matter creation cosmologies predict stronger small-scale dynamics, which implies a faster growth rate of perturbations with respect to the usual $\Lambda$CDM cosmology. Such results point to the possibility of a crucial observational test confronting CCDM with $\Lambda$CDM scenarios through a more detailed analysis involving the CMB, weak lensing, and the large-scale structure.
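For reference, the growth rate of clustering and the growth index $\gamma$ referred to here are conventionally defined as below (standard definitions, not results specific to the CCDM model):

```latex
f(z) \equiv \frac{d\ln\delta}{d\ln a} \simeq \left[\Omega_m(z)\right]^{\gamma},
\qquad
\Omega_m(z) = \frac{\Omega_{m,0}\,(1+z)^3}{E^2(z)},
```

where $\delta$ is the matter density contrast and $E(z) = H(z)/H_0$; for flat $\Lambda$CDM the growth index is $\gamma \approx 6/11 \simeq 0.55$.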
Abstract:
Context. Be stars undergo outbursts producing a circumstellar disk from the ejected material. The beating of non-radial pulsations has been put forward as a possible ejection mechanism. Aims. We analyze the pulsational behavior of the early B0.5IVe star HD 49330, observed during the first CoRoT long run towards the Galactic anticenter (LRA1). This Be star is located close to the lower edge of the beta Cephei instability strip in the HR diagram and showed a 0.03 mag outburst during the CoRoT observations. It is thus an ideal case for testing the aforementioned hypothesis. Methods. We analyze the CoRoT light curve of HD 49330 using Fourier methods and non-linear least-squares fitting. Results. In this star, we find pulsation modes typical of beta Cep stars (p modes) and SPB stars (g modes), with amplitude variations along the run directly correlated with the outburst. These results provide new clues about the origin of the Be phenomenon as well as strong constraints on the seismic modelling of Be stars.
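The frequency analysis named in the Methods can be sketched as follows: a Lomb-Scargle periodogram locates candidate frequencies in an unevenly sampled light curve, and non-linear least squares then refines amplitude, frequency, and phase. The light curve here is synthetic, not the CoRoT data.

```python
import numpy as np
from scipy.signal import lombscargle
from scipy.optimize import curve_fit

# Synthetic, unevenly sampled light curve with one pulsation mode
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 60.0, 3000))            # days
f_true = 1.47                                        # cycles/day (made up)
flux = 0.003 * np.sin(2 * np.pi * f_true * t + 0.5) \
       + rng.normal(0.0, 0.001, t.size)

# Periodogram on a frequency grid (lombscargle expects angular frequency)
freqs = np.linspace(0.05, 10.0, 20000)
power = lombscargle(t, flux, 2 * np.pi * freqs)
f_peak = freqs[np.argmax(power)]

# Refine the mode parameters with non-linear least squares
def mode(t, amp, freq, phase):
    return amp * np.sin(2 * np.pi * freq * t + phase)

popt, _ = curve_fit(mode, t, flux, p0=[0.002, f_peak, 0.0])
print("frequency: %.4f c/d, amplitude: %.5f" % (popt[1], abs(popt[0])))
```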
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential for a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to calculate analytically the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Moreover, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
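A minimal numerical sketch of the principal-component step: build a Fisher matrix for binned H(z) amplitudes, diagonalize it, and reconstruct H(z) from the best-constrained eigenvectors. The Fisher matrix below is a random stand-in (in the paper it is computed analytically from the SN Ia errors), and the high-redshift free parameter used to suppress bias is omitted.

```python
import numpy as np

n_bins = 20
rng = np.random.default_rng(2)

# Stand-in derivative matrix; fisher = J^T J mimics a Fisher matrix
# built from many SN Ia distance-modulus measurements.
J = rng.normal(size=(500, n_bins))
fisher = J.T @ J

# Eigendecomposition: the eigenvectors are the principal components
evals, evecs = np.linalg.eigh(fisher)        # ascending eigenvalues
order = np.argsort(evals)[::-1]              # best-constrained first
evals, evecs = evals[order], evecs[:, order]

# Keep the M best-constrained components and reconstruct H(z)
M = 5
z = np.linspace(0.0, 1.5, n_bins)
h_true = 70.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)   # fiducial H(z)
coeffs = evecs[:, :M].T @ h_true             # projection onto components
h_rec = evecs[:, :M] @ coeffs                # truncated reconstruction
print("rms residual: %.2f km/s/Mpc" % np.sqrt(np.mean((h_rec - h_true) ** 2)))
```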
Abstract:
This paper reports results from a search for $\nu_\mu \to \nu_e$ transitions by the MINOS experiment based on a $7 \times 10^{20}$ protons-on-target exposure. Our observation of 54 candidate $\nu_e$ events in the far detector, with a background of $49.1 \pm 7.0\,\text{(stat)} \pm 2.7\,\text{(syst)}$ events predicted by the measurements in the near detector, requires $2\sin^2(2\theta_{13})\sin^2\theta_{23} < 0.12\ (0.20)$ at the 90% C.L. for the normal (inverted) mass hierarchy at $\delta_{CP} = 0$. The experiment sets the tightest limits to date on the value of $\theta_{13}$ for nearly all values of $\delta_{CP}$, for the normal neutrino mass hierarchy and maximal $\sin^2(2\theta_{23})$.
Abstract:
Using the published KTeV samples of $K_L \to \pi^\pm e^\mp \nu$ and $K_L \to \pi^\pm \mu^\mp \nu$ decays, we perform a reanalysis of the scalar and vector form factors based on the dispersive parametrization. We obtain the phase-space integrals $I_K^e = 0.15446 \pm 0.00025$ and $I_K^\mu = 0.10219 \pm 0.00025$. For the scalar form factor parametrization, the only free parameter is the normalized form factor value at the Callan-Treiman point, $C$; our best fit yields $\ln C = 0.1915 \pm 0.0122$. We also study the sensitivity of $C$ to different parametrizations of the vector form factor. The results for the phase-space integrals and $C$ are then used to test the standard model. Finally, we compare our results with lattice QCD calculations of $F_K/F_\pi$ and $f_+(0)$.
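For context, the Callan-Treiman point referred to above is $\Delta_{K\pi} = m_K^2 - m_\pi^2$, where the soft-pion theorem fixes the normalized scalar form factor (standard relation; $\Delta_{CT}$ is a small chiral correction):

```latex
C \equiv f_0(\Delta_{K\pi}) = \frac{F_K}{F_\pi}\,\frac{1}{f_+(0)} + \Delta_{CT},
\qquad \Delta_{K\pi} = m_K^2 - m_\pi^2 .
```

This is what allows the measured $\ln C$ to be confronted with lattice QCD values of $F_K/F_\pi$ and $f_+(0)$ as a standard model test.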
Abstract:
We present rigorous upper and lower bounds for the momentum-space ghost propagator $G(p)$ of Yang-Mills theories in terms of the smallest nonzero eigenvalue (and of the corresponding eigenvector) of the Faddeev-Popov matrix. We apply our analysis to data from simulations of SU(2) lattice gauge theory in Landau gauge, using the largest lattice sizes to date. Our results suggest that, in three and in four space-time dimensions, the Landau-gauge ghost propagator is not enhanced as compared to its tree-level behavior. This is also seen in plots and fits of the ghost dressing function. In the two-dimensional case, on the other hand, we find that $G(p)$ diverges as $p^{-2-2\kappa}$ with $\kappa \approx 0.15$, in agreement with A. Maas, Phys. Rev. D 75, 116004 (2007). We note that our discussion is general, although we make an application only to pure gauge theory in Landau gauge. Our simulations have been performed on the IBM supercomputer at the University of São Paulo.
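The infrared power law quoted above, $G(p) \sim p^{-2-2\kappa}$, can be extracted by a simple fit; the sketch below does this on synthetic two-dimensional-like data (placeholder numbers, not the actual lattice results).

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic infrared ghost-propagator data following the quoted power law
rng = np.random.default_rng(3)
p = np.geomspace(0.05, 1.0, 25)                 # momenta (arbitrary units)
kappa_true = 0.15
G = p ** (-2.0 - 2.0 * kappa_true) * rng.normal(1.0, 0.03, p.size)

def model(p, A, kappa):
    """Infrared ansatz G(p) = A * p^(-2 - 2*kappa)."""
    return A * p ** (-2.0 - 2.0 * kappa)

popt, pcov = curve_fit(model, p, G, p0=[1.0, 0.1])
print("kappa = %.3f +/- %.3f" % (popt[1], np.sqrt(pcov[1, 1])))
```

Fitting log G against log p would give the same exponent while weighting the data points more evenly across the momentum range.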
Abstract:
We present rigorous upper and lower bounds for the zero-momentum gluon propagator $D(0)$ of Yang-Mills theories in terms of the average value of the gluon field. This allows us to perform a controlled extrapolation of lattice data to infinite volume, showing that the infrared limit of the Landau-gauge gluon propagator in SU(2) gauge theory is finite and nonzero in three and in four space-time dimensions. In the two-dimensional case, we find $D(0) = 0$, in agreement with Maas. We suggest an explanation for these results. We note that our discussion is general, although we apply our analysis only to pure gauge theory in the Landau gauge. Simulations have been performed on the IBM supercomputer at the University of São Paulo.
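The infinite-volume extrapolation mentioned above can be sketched as a fit of finite-volume $D(0)$ values against an inverse power of the lattice size; the data points and the form of the correction term below are illustrative assumptions, not the paper's bounds.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical finite-volume values of D(0) for increasing lattice size L
L = np.array([16.0, 24.0, 32.0, 48.0, 64.0, 96.0])   # lattice sizes
D0 = np.array([9.8, 8.1, 7.3, 6.6, 6.3, 6.1])        # placeholder D(0)

def vol_model(L, D_inf, c, alpha):
    """Assumed finite-size correction: D(0, L) = D_inf + c / L^alpha."""
    return D_inf + c / L ** alpha

popt, _ = curve_fit(vol_model, L, D0, p0=[6.0, 50.0, 1.0])
print("extrapolated D(0) at infinite volume: %.2f" % popt[0])
```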
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies, and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation of the average degree k, with its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in the network identification rate, presenting very good results with small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying some properties of the evaluated method, and can be extended to other inference methods.
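Step (1) of the framework can be sketched with off-the-shelf graph generators: the snippet below builds three of the four cited topologies (geographical networks are omitted, as networkx has no built-in generator for them) and runs made-up Boolean dynamics to produce temporal expression profiles. The update rule, network sizes, and degrees are assumptions for illustration only.

```python
import networkx as nx
import numpy as np

n, k = 100, 4                               # genes and average degree (assumed)
models = {
    "ER": nx.gnp_random_graph(n, k / (n - 1), directed=True, seed=1),
    "WS": nx.watts_strogatz_graph(n, k, 0.1, seed=1),
    "BA": nx.barabasi_albert_graph(n, k // 2, seed=1),
}

def simulate(graph, steps=50, seed=0):
    """Toy Boolean dynamics: each gene takes the XOR of its inputs."""
    g = graph.to_directed()
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n)
    series = [state.copy()]
    for _ in range(steps - 1):
        new = state.copy()
        for gene in g.nodes:
            preds = list(g.predecessors(gene))
            if preds:
                new[gene] = np.bitwise_xor.reduce(state[preds])
        state = new
        series.append(state.copy())
    return np.array(series)                  # shape: (time points, genes)

expression = {name: simulate(g) for name, g in models.items()}
print({name: e.shape for name, e in expression.items()})
```

The simulated series would then feed the feature-selection identification step, and the recovered edges would be scored against the generating graph.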
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. The approach belongs to the family of incremental methods and treats all of the network's constraints, i.e., control, state, and functional constraints. It is based on the perturbation-of-optimum theorem. Starting from a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus transmission network. A further test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and compared with the technique currently used by the Brazilian Control Center.
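In rough terms, incremental methods allocate losses in proportion to each injection weighted by its loss sensitivity at the optimum. The sketch below shows only this generic normalization step, not the paper's perturbation-of-optimum formulation; all numbers are hypothetical.

```python
import numpy as np

# Hypothetical injections [MW] and loss sensitivities dPloss/dPi taken
# from a solved optimal power flow (values are made up).
P = np.array([120.0, 80.0, -60.0, -140.0])   # + generators, - loads
s = np.array([0.02, 0.035, -0.03, -0.045])   # sensitivities at the optimum
P_loss = 6.0                                 # total system loss [MW]

raw = s * P                                  # incremental shares
alloc = raw * (P_loss / raw.sum())           # normalize to the total loss
print(alloc, alloc.sum())                    # shares sum to P_loss
```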
Abstract:
A new, simple approach for modeling and assessing the operation and response of multiline voltage-source converter (VSC)-based flexible ac transmission system controllers, namely the generalized interline power-flow controller (GIPFC) and the interline power-flow controller (IPFC), is presented in this paper. The model and the analysis developed are based on the converters' power balance method, which makes use of the d-q orthogonal coordinates to present a direct solution for these controllers through a quadratic equation. The main constraints and limitations that such devices present while controlling the two independent ac systems considered are also evaluated. In order to examine and validate the steady-state model initially proposed, a phase-shift VSC-based GIPFC was also built in the Alternative Transients Program (ATP), whose results are likewise included in this paper. Where applicable, a comparative evaluation between the GIPFC and the IPFC is also presented.
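For reference, the d-q quantities that enter a converter power-balance formulation follow the standard amplitude-invariant Park transform and power expressions sketched below (generic textbook relations, not the paper's GIPFC model).

```python
import numpy as np

def abc_to_dq(xa, xb, xc, theta):
    """Amplitude-invariant Park transform of a three-phase quantity."""
    d = (2.0 / 3.0) * (xa * np.cos(theta)
                       + xb * np.cos(theta - 2.0 * np.pi / 3.0)
                       + xc * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -(2.0 / 3.0) * (xa * np.sin(theta)
                        + xb * np.sin(theta - 2.0 * np.pi / 3.0)
                        + xc * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

def dq_powers(vd, vq, id_, iq):
    """Active and reactive power in d-q coordinates (amplitude-invariant)."""
    p = 1.5 * (vd * id_ + vq * iq)
    q = 1.5 * (vq * id_ - vd * iq)
    return p, q

# Example: voltage aligned with the d-axis, slightly lagging current
print(dq_powers(vd=1.0, vq=0.0, id_=0.8, iq=-0.2))   # -> (1.2, 0.3)
```

Balancing the p terms of the series and shunt converters is what leads, in the paper's formulation, to the quadratic equation mentioned above.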
Abstract:
This work discusses the determination of breathing patterns in temporal sequences of images obtained from magnetic resonance (MR) imaging and their use in the temporal registration of coronal and sagittal images. The registration is performed without any triggering information and without any special gas to enhance contrast. The temporal sequences of images are acquired in free breathing. The real movement of the lung has never been observed directly, as it is totally dependent on its surrounding muscles and collapses without them. The visualization of the lung in motion is a current topic of research in medicine. The lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared to computerized tomography (CT), MR imaging involves longer acquisition times, but it is preferable because it does not involve radiation. As coronal and sagittal sequences of images are orthogonal to each other, their intersection corresponds to a segment in three-dimensional space. The registration is based on the analysis of this intersection segment. A time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. The algorithm proposed in this work can detect asynchronous movements of the internal lung structures and of the organs surrounding the lung. It is assumed that the diaphragmatic movement is the principal movement and that all the lung structures move almost synchronously. The synchronization is performed through a pattern named the respiratory function, which is obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function. A greedy active contour algorithm adjusts small discrepancies originated by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns. Finally, the composition of coronal and sagittal image pairs that are in the same breathing phase is realized by comparing the respiratory patterns originated from the diaphragmatic and upper boundary surfaces. When available, the respiratory patterns associated with internal lung structures are also used. The results of the proposed method are compared with a pixel-by-pixel comparison method. The proposed method increases the number of registered pairs representing composed images and allows an easy check of the breathing phase.
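The 2DST construction can be sketched directly: take the intensities along the fixed intersection segment in every frame and stack them over time. The image volume below is synthetic, and the column index and row range standing in for the intersection segment are assumptions.

```python
import numpy as np

# Synthetic coronal sequence standing in for the MR frames
n_frames, h, w = 120, 256, 256
rng = np.random.default_rng(4)
frames = rng.random((n_frames, h, w))        # (time, rows, cols)

col = 128                                    # column where the planes intersect
segment_rows = slice(60, 200)                # portion covering the lung

# Each frame contributes one column of intensities along the intersection;
# stacking them over time yields the 2DST image (space x time).
two_dst = np.stack(
    [frames[k, segment_rows, col] for k in range(n_frames)], axis=1
)
print(two_dst.shape)                         # (140, 120): space x time
```

Diaphragm motion then appears as a bright oscillating trace in this image, which is what the interval Hough transform and active contour steps track.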
Abstract:
The recognition of locations that are temporally stable with respect to soil water content is important for soil water management decisions, especially on the sloping land of watersheds. Neutron probe measurements of soil water content (0 to 0.8 m), taken on 20 dates during one year in a 20 ha watershed of the Loess Plateau of China dominated by Ust-Sandiic Entisols and Aeolian sandy soils, were used to characterize temporal stability through two indices: the standard deviation of the relative difference (SDRD) and the mean absolute bias error (MABE). Specific concerns were (a) the relationship of temporal stability with soil depth, (b) the effects of soil texture and land use on temporal stability, and (c) the spatial pattern of the temporal stability. Results showed that the temporal stability of soil water content at 0.2 m was significantly weaker than at depths of 0.6 and 0.8 m. Soil texture significantly (P<0.05) affected the stability of soil water content, except for an insignificant difference between the sandy loam and silt loam textures, while the temporal stability of areas covered by bunge needlegrass was not significantly different from that of areas covered by korshinsk peashrub. Geostatistical analysis showed that the temporal stability was spatially variable in an organized way, as inferred from the degree of spatial dependence index. With increasing soil depth, the range of both temporal stability indices showed an increasing trend, being 65.8-120.5 m for SDRD and 148.8-214.1 m for MABE, respectively. This study provides valuable support for soil water content measurements for soil water management and hydrological applications in sloping land areas.
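The two indices follow the usual relative-difference analysis of temporal stability, sketched below for a (locations x dates) array. MABE is implemented here as the time mean of the absolute relative difference, which is one common reading of the name; the paper should be consulted for the exact formula, and the measurements below are fake.

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 0.15 + 0.05 * rng.random((30, 20))   # fake soil water content

mean_t = theta.mean(axis=0)                  # spatial mean on each date
delta = (theta - mean_t) / mean_t            # relative differences

sdrd = delta.std(axis=1, ddof=1)             # SDRD for each location
mabe = np.abs(delta).mean(axis=1)            # assumed MABE for each location

# Temporally stable locations have small SDRD (and small MABE)
stable = np.argsort(sdrd)[:5]
print("most stable locations:", stable)
```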
Abstract:
Knowledge of the relationship between the spatial variability of the surface soil water content ($\theta$) and its mean across a spatial domain ($\theta_m$) is crucial for hydrological modeling and for understanding soil water dynamics at different scales. With the aim of comparing the soil moisture dynamics and variability between two land uses and exploring the relationship between the spatial variability of $\theta$ and $\theta_m$, this study analyzed sets of surface $\theta$ measurements performed with an impedance soil moisture probe, collected 136 times over one year in two transects covering different land uses, i.e., a korshinsk peashrub transect (KPT) and a bunge needlegrass transect (BNT), in a watershed of the Loess Plateau, China. Results showed that the temporal pattern of $\theta$ behaved similarly for the two land uses, with BNT showing both relatively wetter soils during the wet period and relatively drier soils during the dry period. Soil moisture tended to be temporally stable among different dates, and more stable patterns were observed for dates with more similar soil water conditions. The magnitude of the spatial variation of $\theta$ in KPT was greater than that in BNT. For both land uses, the standard deviation (SD) of $\theta$ generally increased as $\theta_m$ increased, a behavior that could be well described by a natural logarithmic function. A convex relationship between the CV and $\theta_m$, with a maximum CV for both land uses (43.5% in KPT and 41.0% in BNT), could therefore be ascertained. Geostatistical analysis showed that the range in KPT (9.1 m) was shorter than that in BNT (15.1 m). The nugget effect, the structured variability, and hence the total variability increased as $\theta_m$ increased. For both land uses, the spatial dependency generally increased with increasing $\theta_m$.
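The reported SD-versus-$\theta_m$ behavior amounts to fitting a natural logarithmic function to the spatial standard deviation as a function of the spatial mean; a sketch with synthetic data follows (coefficients and noise are placeholders, not the transect measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic (theta_m, SD) pairs mimicking the reported trend
rng = np.random.default_rng(6)
theta_m = np.linspace(0.05, 0.30, 40)            # spatial means
sd = 0.02 * np.log(theta_m) + 0.08 + rng.normal(0, 0.002, theta_m.size)

def log_model(x, a, b):
    """Natural logarithmic relationship SD = a * ln(theta_m) + b."""
    return a * np.log(x) + b

popt, _ = curve_fit(log_model, theta_m, sd)
print("SD = %.3f * ln(theta_m) + %.3f" % tuple(popt))
```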