940 results for "Temporal constraints analysis"


Relevance: 30.00%

Publisher:

Abstract:

Background: Central cord syndrome (CCS) is considered the most common incomplete spinal cord injury (SCI). Independent ambulation is achieved by 87-97% of young patients with CCS, but no gait analysis studies have previously been reported for this pathology. The aim of this study was to analyze the gait characteristics of subjects with CCS and to compare the findings with a healthy, age-, sex- and anthropometrically matched control group (CG), walking both at a self-selected speed and at a matched speed.

Methods: Twelve CCS patients and a CG of twenty subjects were analyzed. Kinematic data were obtained using a three-dimensional motion analysis system with two scanner units. The CG was asked to walk at two different speeds: a self-selected speed and a slower one, similar to the mean gait speed previously recorded in the CCS patient group. Temporal and spatial variables and kinematic variables (maximum and minimum lower-limb joint angles throughout the gait cycle in each plane, along with their instants of occurrence in the gait cycle and the joint range of motion, ROM) were compared between the two groups walking at similar speeds.

Results: The kinematic parameters were compared with both groups walking at a similar speed, given that there was a significant difference in the self-selected speeds (p < 0.05). Hip abduction and knee flexion at initial contact, as well as minimal knee flexion at stance, were larger in the CCS group (p < 0.05). However, the range of knee and ankle motion in the sagittal plane was greater in the CG (p < 0.05). The maximal ankle plantar-flexion values in the stance phase and at toe-off were larger in the CG (p < 0.05).

Conclusions: The gait pattern of CCS patients showed a decrease in knee and ankle sagittal ROM during level walking and an increase in hip abduction that widens the base of support. These findings help to improve the understanding of how CCS affects lower-limb gait.
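
The group comparisons reported above boil down to two-sample tests on individual gait variables. A minimal sketch of such a comparison follows, assuming per-subject values of one variable (labelled here as sagittal knee ROM) are available as two arrays; the numbers and the choice of a Mann-Whitney U test are illustrative, not the study's actual data or statistical protocol.

```python
# Two-sample comparison of a single gait variable (values are invented for
# illustration; this is not the study's data or its exact statistical protocol).
import numpy as np
from scipy import stats

ccs_knee_rom = np.array([38.1, 41.5, 35.0, 44.2, 39.7, 36.8,
                         40.3, 37.5, 42.0, 34.9, 38.8, 41.1])            # 12 CCS patients
cg_knee_rom = np.array([55.2, 58.7, 52.4, 60.1, 57.3, 54.8, 59.0, 56.2, 53.5, 58.1,
                        55.9, 57.7, 54.1, 59.4, 56.8, 52.9, 58.3, 55.4, 57.1, 53.8])  # 20 controls

# Non-parametric test, a reasonable default for small samples.
u_stat, p_value = stats.mannwhitneyu(ccs_knee_rom, cg_knee_rom, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")   # p < 0.05 would mirror the reported group difference
```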

Relevance: 30.00%

Publisher:

Abstract:

Background: Microarray techniques have become an important tool for investigating genetic relationships and assigning different phenotypes. Since microarrays are still very expensive, most experiments are performed with small samples. This paper introduces a method to quantify dependency between data series composed of few sample points. The method is used to construct gene co-expression subnetworks of highly significant edges.

Results: The results shown here are for an adapted subset of a Saccharomyces cerevisiae gene expression data set with low temporal resolution and poor statistics. The method reveals common transcription factors with a high confidence level and allows the construction of subnetworks with high biological relevance, revealing characteristic features of the processes that drive the organism's adaptation to specific environmental conditions.

Conclusion: Our method allows a reliable and sophisticated analysis of microarray data even under severe constraints. The use of systems biology improves biologists' ability to elucidate the mechanisms underlying cellular processes and to formulate new hypotheses.
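
The core step, quantifying dependency between very short series and keeping only highly significant edges, can be sketched as follows. The dependency measure used here (Spearman correlation with a permutation p-value) is a stand-in for illustration, not the specific measure proposed in the paper, and the expression matrix is random.

```python
# Build a co-expression subnetwork keeping only highly significant edges.
# Spearman correlation + permutation p-value is a stand-in dependency measure.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_points = 20, 6                    # short series: few sample points per gene
expr = rng.normal(size=(n_genes, n_points))  # placeholder expression matrix (genes x time points)

def perm_pvalue(x, y, n_perm=2000):
    """p-value of |Spearman rho| against random permutations of one series."""
    rho_obs = abs(stats.spearmanr(x, y)[0])
    hits = sum(abs(stats.spearmanr(x, rng.permutation(y))[0]) >= rho_obs for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

edges = [(i, j) for i, j in itertools.combinations(range(n_genes), 2)
         if perm_pvalue(expr[i], expr[j]) < 0.01]      # keep only highly significant edges
print(f"{len(edges)} significant edges")
```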

Relevance: 30.00%

Publisher:

Abstract:

The dengue virus has a single-stranded, positive-sense RNA genome of approximately 10,700 nucleotides with a single open reading frame that encodes three structural (C, prM, and E) and seven nonstructural (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) proteins. It possesses four antigenically distinct serotypes (DENV 1-4). Many phylogenetic studies address particularities of the different serotypes using convenience samples that are not conducive to a spatio-temporal analysis in a single urban setting. We describe the pattern of spread of distinct lineages of DENV-3 circulating in Sao Jose do Rio Preto, Brazil, during 2006. Blood samples from patients presenting dengue-like symptoms were collected for DENV testing. We performed M-N-PCR using primers based on NS5 for virus detection and identification. The fragments were purified from PCR mixtures and sequenced, and the positive dengue cases were geo-coded. To type the sequenced samples, 52 reference sequences were aligned. The resulting dataset was used for iterative phylogenetic reconstruction under the maximum likelihood criterion. The best demographic model, the growth rate, the rate of evolutionary change, and the time to most recent common ancestor (TMRCA) were estimated, as was the basic reproductive number during the epidemic. We obtained sequences from 82 patients among 174 blood samples and were able to geo-code 46 sequences. The alignment generated a 399-nucleotide-long dataset with 134 taxa. The phylogenetic analysis indicated that all samples were DENV-3 and related to strains circulating on the island of Martinique in 2000-2001. Sixty DENV-3 sequences from Sao Jose do Rio Preto formed a monophyletic group (lineage 1), closely related to the remaining 22 isolates (lineage 2). We assume that these lineages appeared before 2006 on different occasions. By transforming the inferred exponential growth rates into basic reproductive numbers, we obtained R0 = 1.53 for lineage 1 and R0 = 1.13 for lineage 2. Under the exponential model, the TMRCA of lineage 1 dated to 1 year, and that of lineage 2 to 3.4 years, before the last sampling. The possibility of inferring spatio-temporal dynamics from genetic data has generally been little explored, and it may shed light on DENV circulation. The use of both geographically and temporally structured phylogenetic data provided a detailed view of the spread of at least two dengue viral strains in a populated urban area.
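
The step of converting an inferred exponential growth rate r into a basic reproductive number can be illustrated with the common SIR-type approximation R0 ≈ 1 + rD, where D is an assumed mean generation interval; the growth rates and the interval below are hypothetical placeholders, not the paper's actual inputs or estimator.

```python
# Convert inferred exponential growth rates into R0 via the SIR-type approximation
# R0 ≈ 1 + r * D. The rates and the generation interval are hypothetical placeholders.
growth_rates = {"lineage 1": 0.027, "lineage 2": 0.0065}   # per day (invented values)
D = 20.0                                                   # days, assumed generation interval

for lineage, r in growth_rates.items():
    print(f"{lineage}: R0 ≈ {1.0 + r * D:.2f}")
```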

Relevance: 30.00%

Publisher:

Abstract:

Background: Schizophrenia is likely a consequence of DNA alterations that, together with environmental factors, lead to protein expression differences and the ultimate establishment of the illness. The superior temporal gyrus is implicated in schizophrenia and carries out functions such as the processing of speech, language skills and sound processing.

Methods: We performed an individual comparative proteome analysis, using two-dimensional gel electrophoresis, of the left posterior superior temporal gyrus (Wernicke's area, BA22p) from 9 schizophrenia patients and 6 healthy controls, identifying by mass spectrometry several protein expression alterations that could be related to the disease.

Results: Our analysis revealed 11 downregulated and 14 upregulated proteins, most of them related to energy metabolism. Whereas many of the identified proteins have previously been implicated in schizophrenia, such as fructose-bisphosphate aldolase C, creatine kinase and neuron-specific enolase, new putative disease markers were also identified, such as dihydrolipoyl dehydrogenase, tropomyosin 3, breast cancer metastasis-suppressor 1, heterogeneous nuclear ribonucleoproteins C1/C2 and the mitochondrial phosphate carrier protein precursor. In addition, the differential expression of peroxiredoxin 6 (PRDX6) and glial fibrillary acidic protein (GFAP) was confirmed by western blot in schizophrenia prefrontal cortex.

Conclusion: Our data support a dysregulation of energy metabolism in schizophrenia and suggest new markers that may contribute to a better understanding of this complex disease.

Relevance: 30.00%

Publisher:

Abstract:

Background: Detailed analysis of the dynamic interactions among biological, environmental, social, and economic factors that favour the spread of certain diseases is extremely useful for designing effective control strategies. Diseases like tuberculosis, which kills somebody in the world every 15 seconds, require methods that take the disease dynamics into account in order to design truly efficient control and surveillance strategies. The usual, well-established statistical approaches provide insights into the cause-effect relationships that favour disease transmission, but they only estimate risk areas and spatial or temporal trends. Here we introduce a novel approach that uncovers the dynamical behaviour of disease spreading. This information can subsequently be used to validate mathematical models of the dissemination process, from which the underlying mechanisms responsible for the spreading could be inferred.

Methodology/Principal Findings: The method presented here is based on the analysis of the spread of tuberculosis in a Brazilian endemic city over five consecutive years. The detailed analysis of the spatio-temporal correlation of the yearly geo-referenced data, using different characteristic times of the disease evolution, allowed us to trace the temporal path of the aetiological agent, to locate the sources of infection, and to characterize the dynamics of disease spreading. Consequently, the method also allowed the identification of socio-economic factors that influence the process.

Conclusions/Significance: The information obtained can contribute to more effective budget allocation, drug distribution and recruitment of skilled human resources, as well as guide the design of vaccination programs. We propose that this novel strategy can also be applied to the evaluation of other diseases as well as other social processes.
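
A much-simplified stand-in for the spatio-temporal correlation analysis is a Knox-style space-time interaction count: pairs of geo-referenced cases that are close in both space and time, compared against time-shuffled data. The coordinates, dates and thresholds below are synthetic, and the test only illustrates the idea, not the paper's method.

```python
# Knox-style space-time interaction count over geo-referenced cases: pairs of cases
# closer than d_max in space and tau_max in time, compared with time-shuffled data.
# Coordinates, dates and thresholds are synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_cases = 150
xy = rng.uniform(0, 10_000, size=(n_cases, 2))   # case coordinates in metres
t = rng.uniform(0, 5 * 365, size=n_cases)        # notification day over five years

def knox_count(xy, t, d_max, tau_max):
    """Number of case pairs that are close in both space and time."""
    return sum(1 for i, j in itertools.combinations(range(len(t)), 2)
               if np.hypot(*(xy[i] - xy[j])) <= d_max and abs(t[i] - t[j]) <= tau_max)

observed = knox_count(xy, t, d_max=500.0, tau_max=90.0)
null = [knox_count(xy, rng.permutation(t), 500.0, 90.0) for _ in range(49)]  # time-shuffled reference
print(f"observed close pairs: {observed}, mean under shuffling: {np.mean(null):.1f}")
```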

Relevance: 30.00%

Publisher:

Abstract:

We present a re-analysis of the Geneva-Copenhagen survey, which benefits from the infrared flux method to improve the accuracy of the derived stellar effective temperatures and uses the latter to build a consistent and improved metallicity scale. Metallicities are calibrated on high-resolution spectroscopy and checked against four open clusters and a moving group, showing excellent consistency. The new temperature and metallicity scales provide a better match to theoretical isochrones, which are used for a Bayesian analysis of stellar ages. With respect to previous analyses, our stars are on average 100 K hotter and 0.1 dex more metal rich, which shifts the peak of the metallicity distribution function to around the solar value. From Stromgren photometry we are able to derive, for the first time, a proxy for [alpha/Fe] abundances, which enables us to perform a tentative dissection of the chemical thin and thick disc. We find evidence that the latter is composed of an old, mildly but systematically alpha-enhanced population that extends to super-solar metallicities, in agreement with spectroscopic studies. Our revision offers the largest existing kinematically unbiased sample of the solar neighbourhood with full information on kinematics, metallicities, and ages, and thus provides better constraints on the physical processes relevant to the build-up of the Milky Way disc, enabling a better understanding of the Sun in a Galactic context.
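
The Bayesian age analysis mentioned above can be sketched, under a flat age prior, as weighting isochrone points by a Gaussian likelihood of the observed quantities. The "isochrone grid" and the observed star below are synthetic placeholders, not a real stellar-model library or a survey star.

```python
# Bayesian isochrone-age sketch (flat age prior): weight grid points by a Gaussian
# likelihood of observed Teff, [Fe/H] and absolute magnitude. The "isochrone grid"
# and the observed star are synthetic placeholders.
import numpy as np

ages = np.linspace(0.5, 13.5, 27)                  # Gyr
grid_teff = 5800.0 - 40.0 * ages                   # placeholder Teff(age)
grid_feh = np.zeros_like(ages)                     # placeholder [Fe/H](age)
grid_mv = 4.6 + 0.05 * ages                        # placeholder M_V(age)

obs = {"teff": (5650.0, 80.0), "feh": (0.05, 0.10), "mv": (4.9, 0.1)}   # (value, sigma)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

posterior = (gauss(obs["teff"][0], grid_teff, obs["teff"][1]) *
             gauss(obs["feh"][0], grid_feh, obs["feh"][1]) *
             gauss(obs["mv"][0], grid_mv, obs["mv"][1]))
posterior /= posterior.sum()
print(f"most probable age: {ages[posterior.argmax()]:.1f} Gyr "
      f"(mean {np.sum(ages * posterior):.1f} Gyr)")
```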

Relevance: 30.00%

Publisher:

Abstract:

We discuss the dynamics of the Universe within the framework of the massive graviton cold dark matter scenario (MGCDM), in which gravitons are geometrically treated as massive particles. In this modified gravity theory, the main effect of the gravitons is to alter the density evolution of the cold dark matter component in such a way that the Universe evolves into an accelerating expanding regime, as presently observed. Tight constraints on the main cosmological parameters of the MGCDM model are derived by performing a joint likelihood analysis involving recent type Ia supernova data, the cosmic microwave background shift parameter, and the baryonic acoustic oscillations as traced by the Sloan Digital Sky Survey red luminous galaxies. The linear evolution of small density fluctuations is also analyzed in detail. It is found that the growth factor of the MGCDM model differs only slightly (by ~1-4%) from the one provided by the conventional flat ΛCDM cosmology. The growth rates of clustering predicted by the MGCDM and ΛCDM models are confronted with observations, and the corresponding best-fit values of the growth index (gamma) are determined. Using the expectations of realistic future X-ray and Sunyaev-Zeldovich cluster surveys, we derive the dark matter halo mass function and the corresponding redshift distribution of cluster-size halos for the MGCDM model. Finally, we also show that the Hubble flow differences between the MGCDM and ΛCDM models yield a halo redshift distribution that departs significantly from those predicted by other dark energy models. These results suggest that the MGCDM model can be observationally distinguished from ΛCDM and also from a large number of dark energy models recently proposed in the literature.
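
The joint likelihood analysis described above combines several probes into a single chi-square. The sketch below shows the bookkeeping only: the expansion history is a flat ΛCDM placeholder rather than the MGCDM model, the supernova points are synthetic, and the shift-parameter and BAO values are illustrative.

```python
# Bookkeeping of a joint chi-square (SNe Ia + CMB shift parameter + BAO).
# E(z) is a flat LCDM placeholder, NOT the MGCDM expansion history; supernova
# points are synthetic and the shift-parameter / BAO numbers are illustrative.
import numpy as np
from scipy import integrate, optimize

c = 299792.458  # km/s

def E(z, om):                          # placeholder dimensionless Hubble rate (flat LCDM)
    return np.sqrt(om * (1 + z) ** 3 + 1 - om)

def dc(z, om):                         # comoving distance in units of c/H0
    return integrate.quad(lambda x: 1.0 / E(x, om), 0.0, z)[0]

def chi2(params, sn_z, sn_mu, sn_err, h0=70.0):
    om, = params
    # SNe Ia distance moduli
    mu_th = np.array([5 * np.log10((1 + z) * dc(z, om) * c / h0) + 25 for z in sn_z])
    chi2_sn = np.sum(((sn_mu - mu_th) / sn_err) ** 2)
    # CMB shift parameter R = sqrt(Om) * dc(z_dec); value and error illustrative
    chi2_cmb = ((np.sqrt(om) * dc(1089.0, om) - 1.71) / 0.02) ** 2
    # BAO acoustic parameter A(z=0.35); value and error illustrative
    z1 = 0.35
    A_th = np.sqrt(om) * E(z1, om) ** (-1 / 3) * (dc(z1, om) / z1) ** (2 / 3)
    chi2_bao = ((A_th - 0.469) / 0.017) ** 2
    return chi2_sn + chi2_cmb + chi2_bao

sn_z = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
sn_mu = np.array([38.3, 41.0, 42.3, 43.6, 44.2])   # synthetic distance moduli
sn_err = np.full(sn_z.size, 0.2)
best = optimize.minimize(chi2, x0=[0.3], args=(sn_z, sn_mu, sn_err), bounds=[(0.05, 0.95)])
print(f"best-fit matter density: {best.x[0]:.3f}")
```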

Relevance: 30.00%

Publisher:

Abstract:

We discuss the properties of homogeneous and isotropic flat cosmologies in which the present accelerating stage is powered only by the gravitationally induced creation of cold dark matter (CCDM) particles (Ω_m = 1). For some matter creation rates proposed in the literature, we show that the main cosmological functions, such as the scale factor of the universe, the Hubble expansion rate, the growth factor, and the cluster formation rate, are analytically defined. The best CCDM scenario has only one free parameter, and our joint analysis involving baryonic acoustic oscillation + cosmic microwave background (CMB) + SNe Ia data yields Ω̃_m = 0.28 ± 0.01 (1σ), where Ω̃_m is the observed matter density parameter. In particular, this implies that the model has no dark energy, but the fraction of matter that is effectively clustering is in good agreement with the latest determinations from large-scale structure. The growth of perturbations and the formation of galaxy clusters in such scenarios are also investigated. Despite the fact that both scenarios may share the same Hubble expansion, we find that matter creation cosmologies predict stronger small-scale dynamics, which implies a faster growth rate of perturbations with respect to the usual ΛCDM cosmology. Such results point to the possibility of a crucial observational test confronting CCDM with ΛCDM scenarios through a more detailed analysis involving the CMB, weak lensing, and the large-scale structure.
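
For the perturbation-growth comparison, the baseline computation is the standard linear growth equation; the sketch below integrates it for flat ΛCDM and compares the growth rate with the Ω_m(a)^γ approximation. The modified CCDM source term is not reproduced here, so this is only the ΛCDM reference side of the comparison.

```python
# Standard linear growth equation for flat LCDM, integrated in the scale factor a,
# with the growth rate f = dln(delta)/dln(a) compared against Om(a)^0.55.
# This is the LCDM reference only; the CCDM source term is not reproduced.
import numpy as np
from scipy.integrate import solve_ivp

om0 = 0.28

def E2(a):                                   # dimensionless H^2 for flat LCDM
    return om0 / a**3 + 1.0 - om0

def growth_ode(a, y):
    delta, ddelta = y                        # ddelta = d(delta)/da
    dlnE_da = -1.5 * om0 / (a**4 * E2(a))
    return [ddelta, -(3.0 / a + dlnE_da) * ddelta + 1.5 * om0 / (a**5 * E2(a)) * delta]

a_ini = 1e-3
sol = solve_ivp(growth_ode, (a_ini, 1.0), [a_ini, 1.0],   # matter-era start: delta ~ a
                dense_output=True, rtol=1e-8)

a = np.linspace(0.2, 1.0, 9)
delta, ddelta = sol.sol(a)
f_exact = a * ddelta / delta                 # growth rate from the ODE solution
f_approx = (om0 / (a**3 * E2(a))) ** 0.55    # Om(a)^gamma approximation, gamma = 0.55
print(np.round(f_exact, 3))
print(np.round(f_approx, 3))
```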

Relevance: 30.00%

Publisher:

Abstract:

Context. Be stars undergo outbursts that produce a circumstellar disk from the ejected material. The beating of non-radial pulsations has been put forward as a possible ejection mechanism.
Aims. We analyze the pulsational behavior of the early-type B0.5IVe star HD 49330, observed during the first CoRoT long run towards the Galactic anticenter (LRA1). This Be star lies close to the lower edge of the β Cephei instability strip in the HR diagram and showed a 0.03 mag outburst during the CoRoT observations. It is thus an ideal case for testing the aforementioned hypothesis.
Methods. We analyze the CoRoT light curve of HD 49330 using Fourier methods and non-linear least-squares fitting.
Results. In this star, we find pulsation modes typical of β Cep stars (p modes) and SPB stars (g modes), with amplitude variations along the run directly correlated with the outburst. These results provide new clues about the origin of the Be phenomenon as well as strong constraints on the seismic modelling of Be stars.
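
The frequency analysis described above typically alternates between a periodogram search and a least-squares sinusoid fit. A minimal sketch with a synthetic light curve follows; it is not the authors' pipeline, and the frequency, amplitude and sampling are invented.

```python
# Locate a candidate frequency with a Lomb-Scargle periodogram, then refine
# amplitude and phase by non-linear least squares. Synthetic light curve;
# frequency, amplitude and sampling are invented.
import numpy as np
from scipy.optimize import curve_fit
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 60.0, 5000))                 # observing days
f_true = 1.47                                             # cycles per day (invented)
flux = 1.0 + 0.003 * np.sin(2 * np.pi * f_true * t + 0.4) + rng.normal(0, 0.001, t.size)

freq, power = LombScargle(t, flux).autopower(minimum_frequency=0.1, maximum_frequency=10.0)
f0 = freq[np.argmax(power)]                               # candidate frequency

def sine(t, amp, freq, phase, offset):
    return offset + amp * np.sin(2 * np.pi * freq * t + phase)

popt, _ = curve_fit(sine, t, flux, p0=[0.003, f0, 0.0, 1.0])
print(f"frequency = {popt[1]:.4f} c/d, relative amplitude = {abs(popt[0]):.4f}")
```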

Relevance: 30.00%

Publisher:

Abstract:

Aims. A model-independent reconstruction of the cosmic expansion rate is essential for a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory.
Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group.
Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
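
The Fisher-matrix/PCA construction can be sketched by parameterizing H(z) with its value in a few redshift bins, computing the sensitivity of supernova distance moduli to those values at a fiducial model, and diagonalizing the resulting Fisher matrix. The binning, fiducial cosmology and errors below are illustrative choices, not those of the paper.

```python
# H(z) in redshift bins -> Fisher matrix from SN Ia distance-modulus sensitivities
# at a fiducial model -> eigenvectors rank the best-constrained modes.
# Binning, fiducial cosmology and errors are illustrative, not the paper's choices.
import numpy as np

c = 299792.458                                      # km/s
edges = np.linspace(0.0, 1.0, 6)                    # 5 redshift bins for H(z)
z_mid = 0.5 * (edges[:-1] + edges[1:])
H_fid = 70.0 * np.sqrt(0.3 * (1 + z_mid) ** 3 + 0.7)   # fiducial H(z) in km/s/Mpc

z_sn = np.linspace(0.05, 1.0, 300)                  # supernova redshifts
sigma_mu = 0.15                                     # mag per supernova

def widths_below(z):
    """Width of each bin lying below redshift z (partial for the bin containing z)."""
    return np.clip(z - edges[:-1], 0.0, edges[1:] - edges[:-1])

J = np.zeros((z_sn.size, H_fid.size))               # d(mu_n)/d(H_i) at the fiducial model
for n, z in enumerate(z_sn):
    w = widths_below(z)
    d_l = (1 + z) * c * np.sum(w / H_fid)           # luminosity distance, Mpc
    J[n] = (5.0 / (np.log(10.0) * d_l)) * (1 + z) * (-c * w / H_fid**2)

F = J.T @ J / sigma_mu**2                           # Fisher matrix for the binned H(z)
eigval, eigvec = np.linalg.eigh(F)
print("best-determined mode:", np.round(eigvec[:, np.argmax(eigval)], 3))
```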

Relevance: 30.00%

Publisher:

Abstract:

This paper reports results from a search for ν_μ → ν_e transitions by the MINOS experiment, based on a 7 × 10²⁰ protons-on-target exposure. Our observation of 54 candidate ν_e events in the far detector, with a background of 49.1 ± 7.0 (stat) ± 2.7 (syst) events predicted by the measurements in the near detector, requires 2 sin²(2θ₁₃) sin²θ₂₃ < 0.12 (0.20) at the 90% C.L. for the normal (inverted) mass hierarchy at δ_CP = 0. The experiment sets the tightest limits to date on the value of θ₁₃ for nearly all values of δ_CP, for the normal neutrino mass hierarchy and maximal sin²(2θ₂₃).
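
As a rough illustration of why the result is quoted as a limit, the observed event count can be compared with the combined uncertainties on the background prediction; this naive Gaussian combination is not the collaboration's statistical procedure, which uses a proper confidence-interval construction.

```python
# Naive significance of the observed excess (NOT the collaboration's method):
# combine Poisson fluctuation on the observation with the background uncertainties.
import math

observed = 54
background = 49.1
stat_err, syst_err = 7.0, 2.7        # uncertainties on the predicted background

excess = observed - background
total_err = math.sqrt(observed + stat_err**2 + syst_err**2)
print(f"excess = {excess:.1f} events, about {excess / total_err:.1f} sigma")
```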

Relevance: 30.00%

Publisher:

Abstract:

Using the published KTeV samples of K_L → π±e∓ν and K_L → π±μ∓ν decays, we perform a reanalysis of the scalar and vector form factors based on the dispersive parametrization. We obtain the phase-space integrals I_K^e = 0.15446 ± 0.00025 and I_K^μ = 0.10219 ± 0.00025. For the scalar form factor parametrization, the only free parameter is the normalized form factor value at the Callan-Treiman point, C; our best fit results in ln C = 0.1915 ± 0.0122. We also study the sensitivity of C to different parametrizations of the vector form factor. The results for the phase-space integrals and C are then used to make tests of the standard model. Finally, we compare our results with lattice QCD calculations of F_K/F_π and f₊(0).

Relevance: 30.00%

Publisher:

Abstract:

We present rigorous upper and lower bounds for the momentum-space ghost propagator G(p) of Yang-Mills theories in terms of the smallest nonzero eigenvalue (and of the corresponding eigenvector) of the Faddeev-Popov matrix. We apply our analysis to data from simulations of SU(2) lattice gauge theory in Landau gauge, using the largest lattice sizes to date. Our results suggest that, in three and in four space-time dimensions, the Landau-gauge ghost propagator is not enhanced as compared to its tree-level behavior. This is also seen in plots and fits of the ghost dressing function. In the two-dimensional case, on the other hand, we find that G(p) diverges as p^(-2-2κ) with κ ≈ 0.15, in agreement with A. Maas, Phys. Rev. D 75, 116004 (2007). We note that our discussion is general, although we make an application only to pure gauge theory in Landau gauge. Our simulations have been performed on the IBM supercomputer at the University of Sao Paulo.
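
The infrared behaviour quoted above is the exponent of a power-law fit to the low-momentum propagator. A minimal sketch of such a fit, on synthetic data, follows; it only illustrates how κ is read off and is unrelated to the actual lattice data.

```python
# Fit G(p) = A * p^(-2 - 2*kappa) to low-momentum data by linear regression in
# log-log space; kappa compatible with 0 means no enhancement over tree level.
# The data points are synthetic.
import numpy as np

rng = np.random.default_rng(4)
p = np.geomspace(0.1, 1.0, 15)                             # GeV, infrared momenta
G = 2.5 * p ** (-2.0) * rng.lognormal(0.0, 0.02, p.size)   # synthetic, no enhancement

slope, intercept = np.polyfit(np.log(p), np.log(G), 1)
kappa = -(slope + 2.0) / 2.0
print(f"fitted exponent = {slope:.3f}  ->  kappa = {kappa:.3f}")
```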

Relevance: 30.00%

Publisher:

Abstract:

We present rigorous upper and lower bounds for the zero-momentum gluon propagator D(0) of Yang-Mills theories in terms of the average value of the gluon field. This allows us to perform a controlled extrapolation of lattice data to infinite volume, showing that the infrared limit of the Landau-gauge gluon propagator in SU(2) gauge theory is finite and nonzero in three and in four space-time dimensions. In the two-dimensional case, we find D(0)=0, in agreement with Maas. We suggest an explanation for these results. We note that our discussion is general, although we apply our analysis only to pure gauge theory in the Landau gauge. Simulations have been performed on the IBM supercomputer at the University of Sao Paulo.
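
A naive version of the infinite-volume extrapolation is a fit of D(0) against the inverse lattice volume; the sketch below uses synthetic numbers and a simple 1/V ansatz, not the rigorous bound-based extrapolation developed in the paper.

```python
# Fit D(0) measured on several lattice volumes against 1/V and read off the
# V -> infinity intercept. Synthetic numbers; not the paper's bound-based analysis.
import numpy as np

rng = np.random.default_rng(5)
V = np.array([32**4, 48**4, 64**4, 80**4, 96**4], dtype=float)   # lattice volumes
D0 = 8.0 + 2.0e6 / V + rng.normal(0.0, 0.05, V.size)             # synthetic D(0) values

a_fit, d0_inf = np.polyfit(1.0 / V, D0, 1)      # D0(V) ~ a/V + D0(infinity)
print(f"extrapolated infinite-volume D(0) ≈ {d0_inf:.2f}")
```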

Relevance: 30.00%

Publisher:

Abstract:

Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles; however, an important open problem is how to validate such approaches and their results.

This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared with the original network in order to estimate its properties and accuracy.

Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. Signal size was important for the inference method to achieve better accuracy in network identification, with very good results obtained even from small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for validating the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
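
The validation loop described above (generate an AGN, simulate expression, infer the network, compare with the ground truth) can be sketched compactly; the linear-tanh dynamics and the correlation-based predictor selection below are simplified stand-ins for the paper's simulation model and feature-selection criterion.

```python
# Validation loop: (1) generate an artificial gene network, (2) simulate temporal
# expression, (3) select predictors per target gene, (4) compare with ground truth.
# The dynamics and the correlation-based selection are simplified stand-ins.
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n_genes, n_timepoints, k_avg = 30, 50, 2

# (1) Artificial gene network: uniformly random Erdos-Renyi, directed.
agn = nx.gnp_random_graph(n_genes, k_avg / (n_genes - 1), seed=6, directed=True)
W = np.zeros((n_genes, n_genes))
for u, v in agn.edges:
    W[v, u] = rng.choice([-1.0, 1.0])            # signed influence of gene u on gene v

# (2) Simulate temporal expression with noisy saturating linear dynamics.
expr = np.zeros((n_genes, n_timepoints))
expr[:, 0] = rng.normal(size=n_genes)
for t in range(1, n_timepoints):
    expr[:, t] = np.tanh(W @ expr[:, t - 1]) + rng.normal(0.0, 0.1, n_genes)

# (3) For each target, keep the k_avg best time-lagged correlated predictors.
inferred = set()
for target in range(n_genes):
    scores = [abs(np.corrcoef(expr[g, :-1], expr[target, 1:])[0, 1]) for g in range(n_genes)]
    scores[target] = -1.0                         # exclude self-prediction
    for g in np.argsort(scores)[-k_avg:]:
        inferred.add((int(g), target))

# (4) Recovery rate: fraction of true edges that were identified.
true_edges = set(agn.edges)
recovery = len(true_edges & inferred) / max(len(true_edges), 1)
print(f"recovered {recovery:.0%} of the true edges")
```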