826 results for Representation of time
Abstract:
The aim of this study was to compare time-motion indicators during judo matches performed by athletes from different age groups. The following age groups were analysed: Pre-Juvenile (13-14 years, n = 522), Juvenile (15-16 years, n = 353), Junior (19 years, n = 349) and Senior (>20 years, n = 587). The time-motion indicators included: Total Combat Time, Standing Combat Time, Displacement Without Contact, Gripping Time, Groundwork Combat Time and Pause Time. One-way analysis of variance (ANOVA) with the Tukey test, as well as the Kruskal-Wallis and Mann-Whitney tests (for non-parametric data), were conducted, using P < 0.05 as the significance level. The results showed that all analysed groups obtained a median of 7 (first quartile = 3, third quartile = 12) sequences of combat/pause cycles. For Total Combat Time, Standing Combat Time and Gripping Time, Pre-Juvenile and Senior were significantly longer than Juvenile and Junior. Considering Displacement Without Contact, Junior was significantly longer than all other age groups. For Groundwork Combat Time, Senior was significantly longer than all other age groups and Pre-Juvenile was longer than Junior. These results can be used to improve physiological performance in intermittent practices, as well as technical-tactical training during judo sessions.
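A minimal sketch of the statistical workflow this abstract describes (one-way ANOVA with Tukey's test, plus Kruskal-Wallis and Mann-Whitney for non-parametric data), using the Python scientific stack. The group values, sample sizes and the "gripping time" variable below are simulated placeholders, not the study's data.

```python
# Sketch of the parametric and non-parametric group comparisons, with
# hypothetical time-motion data; groups and values are illustrative only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "pre_juvenile": rng.normal(90, 20, 100),   # hypothetical gripping times (s)
    "juvenile":     rng.normal(75, 20, 100),
    "junior":       rng.normal(78, 20, 100),
    "senior":       rng.normal(92, 20, 100),
}

# Parametric comparison: one-way ANOVA followed by Tukey's HSD
f_stat, p_anova = stats.f_oneway(*groups.values())
if p_anova < 0.05:
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Non-parametric alternative: Kruskal-Wallis followed by pairwise Mann-Whitney
h_stat, p_kw = stats.kruskal(*groups.values())
if p_kw < 0.05:
    u, p = stats.mannwhitneyu(groups["pre_juvenile"], groups["junior"],
                              alternative="two-sided")
    print(f"Pre-Juvenile vs Junior: U={u:.1f}, p={p:.4f}")
```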
Abstract:
Subterranean organisms are excellent models for chronobiological studies, yet relatively few taxa have been investigated with this focus. Previous results were interpreted as a pattern of regression of circadian locomotor activity rhythms in troglobitic (exclusively subterranean) species. In this paper we report results of experiments with cave fishes showing variable degrees of troglomorphism (reduction of eyes, melanic pigmentation and other specializations related to hypogean life) submitted to light-dark cycles (LD), preceded and followed by several days in constant darkness (DD1 and DD2, respectively). Samples from seven species have been monitored in our laboratory for the detection of significant circadian rhythms in locomotor activity. S. typhlops, an extremely troglomorphic species, presented the lowest number of significant components in the circadian range (only one individual out of eight in DD1 and three other fish in LD), all weak (low values of spectral power). A higher incidence of circadian components was observed for P. kronei: only one of the six studied catfish lacked significant circadian rhythms under DD1 and DD2, and spectral powers were generally high. Intermediate situations were observed for the remaining species; however, all of them presented relatively strong significant rhythms under LD. Residual oscillations (circadian rhythms in DD2) were detected in at least part of the studied individuals of all species but S. typhlops, without a correlation with the spectral powers of the LD rhythms, i.e., individuals exhibiting residual oscillations were not necessarily those with the strongest LD rhythms. In conclusion, the accumulated evidence for troglobitic fishes strongly supports the hypothesis of external, environmental selection for circadian locomotor rhythms.
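As a hedged illustration of how a significant circadian component can be detected in an activity record, the sketch below runs a periodogram on a simulated hourly locomotor series and checks for a peak near a 24 h period. The series, the 20-28 h search band and the significance threshold are illustrative assumptions, not the study's actual procedure.

```python
# Detecting a ~24 h component in a simulated locomotor activity record.
import numpy as np
from scipy.signal import periodogram

hours_per_day, n_days = 24, 10
t = np.arange(hours_per_day * n_days)                # hourly samples
rng = np.random.default_rng(1)
activity = 5 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

freq, power = periodogram(activity, fs=1.0)          # cycles per hour
circadian = (freq > 1 / 28) & (freq < 1 / 20)        # roughly 20-28 h periods
peak_idx = circadian.nonzero()[0][np.argmax(power[circadian])]

# Crude significance check: peak power vs. the spectrum's overall level
threshold = power.mean() + 3 * power.std()
print(f"peak period = {1 / freq[peak_idx]:.1f} h, "
      f"significant = {power[peak_idx] > threshold}")
```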
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
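A minimal sketch of the two-step idea described here: fit a time-series model to each monitoring well, then spread the forecasts over space. The well coordinates, levels and the simple autoregressive model are hypothetical, and plain linear interpolation stands in for the geostatistical (kriging) step a real application would use.

```python
# Per-well time-series forecast followed by spatial interpolation (illustrative).
import numpy as np
from scipy.interpolate import griddata
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
n_wells, n_months = 12, 60
xy = rng.uniform(0, 10, size=(n_wells, 2))           # hypothetical well coordinates (km)

forecasts = []
for w in range(n_wells):
    # Synthetic monthly water levels: seasonal signal + noise
    t = np.arange(n_months)
    levels = 550 + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, n_months)
    fit = AutoReg(levels, lags=12).fit()
    forecasts.append(fit.predict(start=n_months, end=n_months)[0])  # one step ahead

# Interpolate the one-step-ahead forecasts onto a regular grid
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
surface = griddata(xy, np.array(forecasts), (gx, gy), method="linear")
print(surface.shape)
```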
Abstract:
Background: Several models have been designed to predict the survival of patients with heart failure. These models, while available and widely used both for stratifying patients and for deciding upon different treatment options at the individual level, have several limitations. Specifically, some clinical variables that may influence prognosis may have an influence that changes over time. Statistical models that include such characteristics may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival allowing for covariates with time-varying effects known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with heart failure, functional class III and IV, between 2002 and 2004 and followed up to 2006 were analyzed using the Cox proportional hazards model, variations of the Cox model, and Aalen's additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also presented a stronger initial effect. The impacts of age and sodium were constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables. The implementation of covariates with time-varying effects in heart failure prognostication models may reduce bias and increase the specificity of such models.
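A hedged sketch of how time-varying covariate effects can be probed in a heart failure survival dataset with the lifelines library: fit a standard Cox model, test the proportional-hazards assumption, and fit Aalen's additive model, whose cumulative coefficients show how each effect evolves over follow-up. The DataFrame, its column names and the simulated values are hypothetical stand-ins for the cohort described above.

```python
# Cox PH fit, proportional-hazards test, and Aalen additive fit on simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, AalenAdditiveFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(24, n),                  # months of follow-up
    "died": rng.integers(0, 2, n),
    "age": rng.normal(65, 10, n),
    "sodium": rng.normal(138, 4, n),
    "hemoglobin": rng.normal(12, 2, n),
    "creatinine": rng.normal(1.4, 0.5, n),
    "lvef": rng.normal(35, 10, n),
})

# Standard Cox proportional hazards fit
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")

# Low p-values in this test flag covariates whose effect appears to change
# over time (hemoglobin, creatinine and LVEF in the study above).
print(proportional_hazard_test(cph, df, time_transform="rank").summary)

# Aalen's additive model: cumulative coefficients trace each effect over time
aaf = AalenAdditiveFitter(coef_penalizer=0.1).fit(df, duration_col="time", event_col="died")
print(aaf.cumulative_hazards_.tail())
```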
Abstract:
Background: A common approach to the analysis of time series gene expression data is the clustering of genes with similar expression patterns throughout time. Clustered gene expression profiles point to the joint contribution of groups of genes to a particular cellular process. However, since genes belong to intricate networks, other features, besides comparable expression patterns, should provide additional information for the identification of functionally similar genes. Results: In this study we perform gene clustering through the identification of Granger causality between and within sets of time series gene expression data. Granger causality is based on the idea that the cause of an event cannot come after its consequence. Conclusions: This kind of analysis can be used as a complementary approach to functional clustering, wherein genes would be clustered not solely on the basis of their expression similarity but also on their topological proximity, built according to the intensity of Granger causality among them.
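A hedged sketch of clustering genes by pairwise Granger causality rather than by expression similarity alone. The expression matrix is simulated, and the p-value-based distance plus hierarchical clustering are illustrative choices, not the paper's exact procedure.

```python
# Pairwise Granger-causality tests turned into a proximity matrix and clustered.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
n_genes, n_timepoints, maxlag = 6, 50, 2
expr = rng.normal(size=(n_genes, n_timepoints))      # rows: genes, cols: time points

# "Causal proximity": 1 - smallest p-value of the Granger F-test over the lags
prox = np.zeros((n_genes, n_genes))
for i in range(n_genes):
    for j in range(n_genes):
        if i == j:
            continue
        data = np.column_stack([expr[i], expr[j]])   # does gene j Granger-cause gene i?
        res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
        prox[i, j] = 1 - min(res[lag][0]["ssr_ftest"][1] for lag in range(1, maxlag + 1))

dist = 1 - (prox + prox.T) / 2                       # symmetrised distance in [0, 1]
np.fill_diagonal(dist, 0.0)
clusters = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(clusters)
```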
Abstract:
This work proposes a system for the classification of industrial steel pieces by means of a magnetic nondestructive device. The proposed classification system has two main stages: an online stage and an off-line optimization stage. In the online stage, the system classifies inputs and saves misclassification information in order to perform posterior analyses. In the off-line optimization stage, the topology of a Probabilistic Neural Network is optimized by a Feature Selection algorithm combined with the Probabilistic Neural Network to increase the classification rate. The proposed Feature Selection algorithm searches the signal spectrogram by combining three basic elements: a Sequential Forward Selection algorithm, a Feature Cluster Grow algorithm with classification rate gradient analysis, and a Sequential Backward Selection algorithm. In addition, a trash-data recycling algorithm is proposed to obtain optimal feedback samples selected from the misclassified ones.
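A hedged sketch of the two building blocks named above: a minimal Probabilistic Neural Network (a Gaussian Parzen-window classifier) and a greedy Sequential Forward Selection loop over spectrogram features. The data are simulated and the loop is a simplified stand-in for the paper's combined SFS / Feature Cluster Grow / SBS procedure.

```python
# Minimal PNN classifier plus greedy sequential forward feature selection.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Assign each test sample to the class with the largest summed Gaussian kernel."""
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for k, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, k] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
    return classes[scores.argmax(axis=1)]

def sequential_forward_selection(X_tr, y_tr, X_val, y_val, max_features=5):
    """Greedily add the feature that most improves validation accuracy."""
    selected, best_acc = [], 0.0
    while len(selected) < max_features:
        candidates = [f for f in range(X_tr.shape[1]) if f not in selected]
        accs = []
        for f in candidates:
            cols = selected + [f]
            pred = pnn_predict(X_tr[:, cols], y_tr, X_val[:, cols])
            accs.append((pred == y_val).mean())
        if max(accs) <= best_acc:
            break                                    # no candidate improves accuracy
        best_acc = max(accs)
        selected.append(candidates[int(np.argmax(accs))])
    return selected, best_acc

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 20))                       # hypothetical spectrogram features
y = (X[:, 3] + X[:, 7] > 0).astype(int)              # labels depend on two features
sel, acc = sequential_forward_selection(X[:120], y[:120], X[120:], y[120:])
print(sel, round(acc, 3))
```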
Abstract:
Recording made by Ciencia compartida (University Library)
Abstract:
This paper presents experimental measurements of isobaric vapor-liquid equilibria (iso-p VLE) and excess volumes (vE) at several temperatures in the interval (288.15 to 328.15) K for six binary systems composed of two alkyl (methyl, ethyl) propanoates and three odd-carbon alkanes (C5 to C9). The mixing processes were expansive, vE > 0, with (∂vE/∂T)p > 0, and endothermic. The installation used to measure the iso-p VLE was improved by using a PC to control three of the variables involved in the experiments.
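For reference, the excess molar volume reported here follows the standard thermodynamic definition (general background, not reproduced from the paper): the difference between the molar volume of the mixture and the mole-fraction-weighted molar volumes of the pure components, with expansive mixing corresponding to a positive value and a positive isobaric temperature coefficient.

```latex
v^{E} = v_{\mathrm{mix}} - \sum_{i} x_{i}\, v_{i}^{*},
\qquad
v^{E} > 0,
\qquad
\left( \frac{\partial v^{E}}{\partial T} \right)_{p} > 0
```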
Abstract:
In this thesis we discuss a representation of quantum mechanics and quantum and statistical field theory based on a functional renormalization flow equation for the one-particle-irreducible average effective action, and we employ it to get information on some specific systems.
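The flow equation referred to here is commonly quoted as the Wetterich equation for the scale-dependent average effective action Γ_k (standard form, not transcribed from the thesis itself), where R_k is the infrared regulator and Γ_k^{(2)} the second functional derivative with respect to the fields.

```latex
\partial_k \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[
      \left( \Gamma_k^{(2)}[\phi] + R_k \right)^{-1} \partial_k R_k
    \right]
```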
Abstract:
The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ⁷Li⁺ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts, both time dilation and the ion velocity can be extracted with high accuracy, allowing for a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve the robust control of the laser frequencies required for the beam times, a redundant system of frequency standards consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb was developed. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system, were built. During the first experiments, the production, acceleration and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam allowed, for the first time, its velocity to be measured directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step, the first sub-Doppler spectroscopy signals from an ion beam at 33.8 % of the speed of light could be recorded. The unprecedented accuracy of such experiments allowed a new upper bound to be derived for possible higher-order deviations from special relativity. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus lead to a further contribution to the test of the standard model.
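The parallel/antiparallel excitation scheme is an Ives-Stilwell-type test: in special relativity, the laboratory-frame laser frequencies ν_p (parallel) and ν_a (antiparallel) that are resonant with a transition of rest-frame frequency ν_0 in an ion moving at β = v/c obey the relations below, so any measured deviation of the product ν_p ν_a from ν_0² bounds deviations from time dilation (standard textbook relations, not transcribed from the thesis).

```latex
\nu_p = \frac{\nu_0}{\gamma\,(1 - \beta)},
\qquad
\nu_a = \frac{\nu_0}{\gamma\,(1 + \beta)},
\qquad
\nu_p\,\nu_a = \nu_0^{2}
```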
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving that these systems are appropriate for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the “Full waveform VTEM” dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, whether AEM or ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often hold large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further attempt to use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate the aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.