17 results for time-motion analysis
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
We propose a new approach to the reduction and abstraction of visual information for robot vision applications. Essentially, we use a multi-resolution representation combined with a moving fovea to reduce the amount of information extracted from an image. We introduce a mathematical formalization of the moving fovea approach and mapping functions that support the use of this model. Two indexes (resolution and cost) are proposed to guide the choice of the model variables. With this theoretical approach, it is possible to apply several filters, calculate disparity and perform motion analysis in real time (less than 33 ms to process an image pair on an AMD Turion Dual Core 2 GHz notebook). As the main result, most of the time the moving fovea allows the robot to keep a region of interest visible in both images without physically moving its devices. We validate the proposed model with experimental results.
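For illustration only, the sketch below shows one common way to build a fixed-size multi-resolution structure around a moving fovea point. It is not the formalization of the thesis; the function name `fovea_levels`, the `window` size and the factor-of-two growth per level are assumptions.

```python
import cv2  # OpenCV; the image is a NumPy array as returned by cv2.imread

def fovea_levels(image, fovea_center, num_levels=4, window=64):
    """Build a fixed-size multi-resolution structure around a moving fovea.

    Each level covers a region twice as large as the previous one, centred on
    `fovea_center`, but is resampled to the same `window` x `window` size, so
    the data volume grows with the number of levels, not with image resolution.
    """
    h, w = image.shape[:2]
    cx, cy = fovea_center
    levels = []
    for k in range(num_levels):
        half = (window // 2) * (2 ** k)            # region doubles at each level
        x0, x1 = max(0, cx - half), min(w, cx + half)
        y0, y1 = max(0, cy - half), min(h, cy + half)
        patch = image[y0:y1, x0:x1]
        levels.append(cv2.resize(patch, (window, window)))
    return levels
```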
Abstract:
The purpose of this study was to compare hemiparetic gait overground and on a treadmill. Seventeen chronic stroke patients were included in the study. They walked overground and on a level treadmill at the same speed. The Qualisys Medical AB motion analysis system was used to quantify the joint kinematics of the paretic lower limb and the spatio-temporal parameters in the two conditions, overground walking and treadmill walking, in three 5-minute samples. During the first sample, the subjects walked on the treadmill with greater cadence, shorter stride length, shorter step time on the paretic lower limb, greater range of motion at the hip and knee, greater knee flexion at initial contact, more knee extension and less ankle dorsiflexion during the stance phase. It is important to emphasize that maximal knee flexion and ankle dorsiflexion occurred later on the treadmill. Comparisons between the walking samples on the treadmill did not reveal any changes in the gait parameters over time. Nonetheless, when the third treadmill sample was compared with overground walking, some variables showed equivalence, such as the total range of motion of the hip, the knee angle at initial contact and its maximal extension during the stance phase. In summary, walking on a treadmill, although it influences the familiarization process, did not completely change the gait characteristics of chronic hemiparetic patients.
Abstract:
Forecasting is the basis for making strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus several methods to assist in the task of time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANN) are a relatively new and promising method for business forecasting that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study aimed to examine whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional time series analysis methods. For this purpose we developed a quantitative study, based on financial economic indices, and built two feedforward ANN models with supervised learning, whose structures consisted of 20 inputs, 90 neurons in one hidden layer and one output (the Ibovespa). These models used backpropagation, an activation function based on the tangent sigmoid and a linear output function. Since the aim was to analyze the suitability of Artificial Neural Networks for forecasting the Ibovespa, we compared their results with those of a GARCH(1,1) time series model. Once both methods (ANN and GARCH) had been applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors using the MSE, RMSE, MAE, standard deviation, Theil's U and forecast encompassing tests. It was found that the models developed by means of ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that the three models have smaller errors than a naïve forecast. Although the ANN based on returns has lower precision indicator values than the ANN based on prices, the forecast encompassing test rejected the hypothesis that either model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide more appropriate Ibovespa forecasts than traditional time series models, represented by the GARCH model.
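As a reference for the error measures mentioned above, the minimal sketch below computes MSE, RMSE, MAE and one common form of Theil's U from a forecast and the observed series. It is a generic implementation, not the code used in the study, and the function name is illustrative.

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """Standard forecast error measures used to compare competing models."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    # Theil's U (U1 form): 0 means a perfect forecast, 1 the worst case
    theil_u = rmse / (np.sqrt(np.mean(y_true ** 2)) + np.sqrt(np.mean(y_pred ** 2)))
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "Theil_U": theil_u}
```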
Abstract:
The main objective of this study is to apply recently developed methods of statistical physics to time series analysis, particularly to electrical induction log profiles of oil wells, in order to study the petrophysical similarity of those wells in a spatial distribution. For this, we used the DFA method to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied clustering analysis. For these tests we used the non-hierarchical method called K-means. Usually based on the Euclidean distance, K-means consists in dividing the elements of a data matrix into k groups, so that the similarities among elements belonging to different groups are as small as possible. In order to test whether a dataset generated by the K-means method, or randomly generated datasets, form spatial patterns, we created the parameter Ω (index of neighborhood). High values of Ω reveal more aggregated data, and low values of Ω indicate scattered data or data without spatial correlation. We thus concluded that the DFA data from the 54 wells are clustered and can be used to characterize the fields spatially. Applying a contour-level technique, we corroborated the results obtained by K-means, confirming that DFA is effective for spatial analysis.
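For readers unfamiliar with DFA, the sketch below shows a minimal first-order detrended fluctuation analysis that returns the scaling exponent. The segment sizes and the linear detrending order are generic choices, not necessarily those adopted in the thesis.

```python
import numpy as np

def dfa_exponent(series, scales=None):
    """First-order DFA sketch: returns the scaling exponent alpha of a series."""
    x = np.asarray(series, float)
    y = np.cumsum(x - x.mean())                    # integrated (profile) series
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(n // 4), 15).astype(int))
    flucts = []
    for s in scales:
        f2 = []
        for i in range(n // s):                    # non-overlapping windows of size s
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)           # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))        # fluctuation function F(s)
    # alpha is the slope of log F(s) versus log s
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha
```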
Abstract:
One of the objectives of this work is the analysis of planar structures using PBG (Photonic Bandgap), a new method of controlling the propagation of electromagnetic waves in devices with dielectrics. The basic theory of these structures is presented, as well as applications and the determination of certain parameters. The analysis is performed for PBG structures, including the basic theory and applications in planar structures, and considerations are made regarding the implementation of devices. The TTL (Transverse Transmission Line) method is employed, characterized by the simplicity of the treatment of the equations that govern the propagation of electromagnetic waves in the structure. In this method, the fields in x and z are expressed as functions of the fields in the transverse direction y in the FTD (Fourier Transform Domain). This method is useful for determining the complex propagation constant, with applications at high frequencies and in photonics. Structures on a micrometric scale operating at frequencies in the Terahertz range are addressed, a first step toward operation in the visible spectrum. The mathematical basis for determining the electromagnetic fields in the structure is presented, based on the TTL method and taking into account the dimensions addressed in this work. Calculations for the determination of the complex propagation constant are also carried out, and the computational implementation for high frequencies is presented. Initially, the analysis is based on open microstrip lines with a semiconductor substrate. Finally, considerations are made regarding applications of these devices in the area of telecommunications, along with suggestions for future work.
Abstract:
The opening of the Brazilian electricity market and the competitiveness among companies in the energy sector have increased the utilities' search for useful information and tools to assist their decision-making activities. An important source of knowledge for these utilities is the time series of energy demand. The identification of behavior patterns and the description of events become important for the execution of planning, seeking improvements in service quality and financial benefits. This dissertation presents a methodology based on time series mining and representation tools, in order to extract knowledge relating electricity demand series from several interconnected substations of an electric utility. The method exploits the relationships of duration, coincidence and partial order of events in multidimensional time series. The knowledge is represented using the language proposed by Mörchen (2005), called Time Series Knowledge Representation (TSKR). We conducted a case study using the energy demand time series of 8 substations interconnected by a ring system, which feeds the metropolitan area of Goiânia-GO, provided by CELG (Companhia Energética de Goiás), responsible for power distribution in the state of Goiás (Brazil). Using the proposed methodology, three levels of knowledge were extracted that describe the behavior of the studied system, clearly representing the system dynamics and becoming a tool to assist planning activities.
Abstract:
The rainfall regime of the semiarid region of northeastern Brazil is highly variable. The climate processes associated with rainfall are complex, and their effects may represent extreme situations of drought or floods, which can have adverse effects on society and the environment. The regional economy has a significant agricultural component, which is strongly influenced by weather conditions. Maximum precipitation analysis is traditionally performed using the intensity-duration-frequency (IDF) probabilistic approach. Results from such analysis are typically used in engineering projects involving hydraulic structures such as drainage network systems and road structures. On the other hand, precipitation data analysis may require the adoption of some kind of event identification criterion, and the minimum inter-event duration (IMEE) is one of the most used. This study aims to analyze the effect of the IMEE on the obtained rain event properties. For this purpose, a nine-year precipitation time series (2002-2011) was used. The data were obtained from an automatic rain gauge station installed in an environmentally protected area, the Seridó Ecological Station. The results showed that the adopted IMEE value has an important effect on the number of events, duration, event depth, mean rainfall rate and mean inter-event duration. Furthermore, a higher occurrence of extreme events was observed for small IMEE values. Most events showed an average rainfall intensity higher than 2 mm/h regardless of the IMEE. The storm advance coefficient was, in most cases, within the first quartile of the event, regardless of the IMEE value. Time series analysis using partial duration series made it possible to fit the IDF equations to local characteristics.
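A minimal sketch of event identification by minimum inter-event duration is given below: a new event starts whenever the dry gap to the previous wet record exceeds the IMEE threshold. The function and variable names and the 6-hour default are illustrative, not taken from the study.

```python
import numpy as np

def split_into_events(times_h, rain_mm, imee_h=6.0):
    """Split a rainfall record into events using a minimum inter-event duration.

    `times_h` are timestamps (hours) of non-zero rainfall records and `rain_mm`
    the corresponding depths; a gap longer than `imee_h` hours starts a new event.
    """
    times_h = np.asarray(times_h, float)
    rain_mm = np.asarray(rain_mm, float)
    events, current = [], [0]
    for i in range(1, len(times_h)):
        if times_h[i] - times_h[i - 1] > imee_h:
            events.append(current)
            current = [i]
        else:
            current.append(i)
    events.append(current)
    # Summarize each event: duration (h), total depth (mm), mean intensity (mm/h)
    summary = []
    for idx in events:
        dur = max(times_h[idx[-1]] - times_h[idx[0]], 1e-6)
        depth = float(np.sum(rain_mm[idx]))
        summary.append({"duration_h": dur, "depth_mm": depth,
                        "intensity_mm_h": depth / dur})
    return summary
```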
Abstract:
In this work we have studied, by Monte Carlo computer simulation, several properties that characterize damage spreading in the Ising model, defined on Bravais lattices (the square and triangular lattices) and on the Sierpinski gasket. First, we investigated the antiferromagnetic model on the triangular lattice with a uniform magnetic field, under Glauber dynamics; the chaotic-frozen critical frontier that we obtained coincides, within error bars, with the paramagnetic-ferromagnetic frontier of the static transition. Using heat-bath dynamics, we studied the ferromagnetic model on the Sierpinski gasket and showed that there are two times that characterize the relaxation of the damage: one of them satisfies the generalized scaling theory proposed by Henley (critical exponent z ~ A/T at low temperatures), whereas the other does not obey any of the known scaling theories. Finally, we used methods of time series analysis to study, under Glauber dynamics, the damage in the ferromagnetic Ising model on the square lattice. We obtained a Hurst exponent of 0.5 at high temperatures, which grows towards 1 close to the temperature TD that separates the chaotic and frozen phases.
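The sketch below illustrates the usual damage-spreading setup on the square lattice, assuming the standard procedure in which two replicas differing by one spin are updated at the same sites with the same random numbers under heat-bath (Glauber) updates. Lattice size, temperature and the recorded observable are illustrative choices, not the parameters of the thesis.

```python
import numpy as np

def glauber_damage(L=32, T=2.5, sweeps=200, seed=0):
    """Damage spreading in the 2D ferromagnetic Ising model (heat-bath updates).

    Returns the fraction of differing spins (the damage) after each MC sweep.
    """
    rng = np.random.default_rng(seed)
    a = rng.choice([-1, 1], size=(L, L))
    b = a.copy()
    b[0, 0] *= -1                                  # initial damage: one flipped spin
    damage = []
    for _ in range(sweeps):
        for _ in range(L * L):                     # one Monte Carlo sweep
            i, j = rng.integers(L), rng.integers(L)
            r = rng.random()                       # shared random number for both replicas
            for s in (a, b):
                h = (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
                     s[i, (j + 1) % L] + s[i, (j - 1) % L])
                p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))   # heat-bath P(spin = +1)
                s[i, j] = 1 if r < p_up else -1
        damage.append(float(np.mean(a != b)))
    return damage
```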
Abstract:
The study of sunspots has consistently contributed to a better understanding of magnetic phenomena of the Sun, such as its activity. From the dynamics of sunspots it was found that the Sun has a rotation period of twenty-seven days around its axis. With the help of the Sun-As-A-Star project, which has obtained solar spectra for more than thirty years, we observed oscillations of both the depth and the equivalent width of spectral lines, and their analysis returns information about the characteristics of solar magnetism. This work also aims to find patterns of the solar magnetic activity cycle and the mean rotation period of the Sun, and to indicate which spectral lines are sensitive to magnetic activity and which are not. Among the sensitive lines, Ti II 5381.0 Å stands out as the best indicator of the solar rotation period and also shows different rotation periods in the cycles of minimum and maximum magnetic activity. It is the first time that clearly distinct rotation periods are observed in the different cycles. The analysis also shows that Ca II 8542.1 Å and H I 6562.0 Å indicate the eleven-year magnetic activity cycle. Some spectral lines showed no connection with solar activity; this result can help planet-search programs that use spectroscopic methods. Data analysis was performed using the Lomb-Scargle method, which performs time series analysis for unequally spaced data. Observing different rotation periods in the different cycles of magnetic activity contributes to a discussion that has lasted for many decades. We verified that spectroscopy can also determine the period of stellar rotation, so the method can be generalized to other stars.
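As an illustration of a Lomb-Scargle period search on unevenly sampled data, the sketch below uses SciPy's `lombscargle`; the period grid, variable names and the example values are assumptions, not the pipeline used in the thesis.

```python
import numpy as np
from scipy.signal import lombscargle

def rotation_periodogram(t_days, index, periods_days):
    """Lomb-Scargle power over a grid of trial periods for unevenly sampled data."""
    t = np.asarray(t_days, float)
    y = np.asarray(index, float)
    y = y - y.mean()                                         # remove the mean
    omega = 2.0 * np.pi / np.asarray(periods_days, float)    # angular frequencies
    return lombscargle(t, y, omega)

# e.g. search for a ~27-day rotation signal in a line-depth index:
# periods = np.linspace(20.0, 40.0, 500)
# power = rotation_periodogram(t_obs, line_depth, periods)
# best_period = periods[np.argmax(power)]
```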
Abstract:
BACKGROUND: Treadmill training with partial body weight support (BWS) has shown many benefits for patients after a stroke, but its effects when combined with biofeedback are not well known. OBJECTIVE: The purpose of this study was to evaluate the immediate effects of visual and auditory biofeedback, combined with treadmill training with BWS, on the walking functions of hemiplegic subjects. METHODS: We conducted a randomized controlled clinical trial with 30 subjects in the chronic stage of stroke, who underwent treadmill training with BWS (control), combined with visual biofeedback, provided by the treadmill monitor through the symbolic appearance of feet as the subject stepped, or with auditory biofeedback, using a metronome set to 115% of the individual's cadence. The subjects were evaluated by kinematics, with data obtained by the Qualisys Motion Analysis System. To assess differences between groups and within each group after training, a 3 x 2 repeated-measures ANOVA was applied. RESULTS: There were no statistical differences between groups in any spatio-temporal or angular variable, but within each group there was an increase in walking speed and stride length after training. The visual biofeedback group increased the stance period and reduced the swing period and the symmetry ratio, and the auditory biofeedback group reduced the double-stance period. The range of motion of the knee and ankle and the plantar flexion increased in the visual biofeedback group. CONCLUSION: There are no differences between the immediate effects of treadmill gait training with BWS performed with or without visual or auditory biofeedback. However, visual biofeedback can promote changes in a larger number of spatiotemporal and angular gait variables.
Abstract:
In this work I have investigated the symbolic meaning of a specific place. I have started from the theoretical assumption that places are social relations resulting from material and symbolic conditions developed at a certain time and by certain factors. In this sense, I have analyzed the symbolic aspect of the sugar plantation in literary works created by the writer José Lins do Rego, from the state of Paraíba. I intend to analyze the symbolic dimension, the meanings, values and images used by this writer to portray the sugar plantation. Giving special attention to the works of the so-called "sugar plantation cycle", I have searched for the senses and meanings used in José Lins do Rego's literary discourse to create a fictional sugar plantation, portraying this place in a specific way. Based on cultural history, I have used several sources: literary works, book prefaces, memoirs, journalistic works, letters written by intellectuals and history books. My period of analysis runs from 1919, the beginning of José Lins do Rego's intellectual activity, until 1943, the publication of Fogo Morto, the last literary work analyzed here. In symbolic terms, what is the sugar plantation, this place that so thoroughly marked José Lins do Rego's life and literary work? That was the structural question that guided the present research.
Abstract:
Motion capture is a main tool for quantitative motion analysis. Since the 19th century, several motion capture systems have been developed for biomechanics studies, animations, games and movies. Biomechanics and kinesiology involve and depend on knowledge from distinct fields, namely engineering and the health sciences, and a precise human motion analysis requires knowledge from both. It is therefore necessary to use didactic tools and methods for research and teaching as learning aids. The motion analysis and capture devices currently found on the market and in educational institutions present difficulties for didactic practice: difficulty of transportation, high cost and limited freedom for the user in data acquisition. Therefore, motion analysis is either performed qualitatively or performed quantitatively in highly complex laboratories. Based on these problems, this work presents the development of a motion capture system for didactic use: a cheap, light, portable and easy-to-use device with free software. The design includes the selection of the device, the development of the corresponding software and tests. The developed system uses Microsoft's Kinect device, chosen for its low cost, low weight, portability and ease of use, which delivers three-dimensional data with only one peripheral device. The proposed programs use the hardware to capture motion, store and reproduce the recordings, process the motion data and present the data graphically.
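As an example of the kind of processing such a system applies to the captured 3D coordinates, the sketch below computes a joint angle from three skeleton points. It is a generic illustration with hypothetical point names, not the software developed in this work.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint `b` formed by 3D points a-b-c (e.g. hip-knee-ankle)."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    u, v = a - b, c - b                              # segment vectors meeting at the joint
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: knee angle from hip, knee and ankle coordinates (metres)
# print(joint_angle([0.0, 1.0, 0.0], [0.0, 0.5, 0.05], [0.0, 0.0, 0.0]))
```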
Abstract:
Analogous to sunspots and solar photospheric faculae, whose visibility is modulated by stellar rotation, stellar active regions consist of cool spots and bright faculae caused by the magnetic field of the star. Such starspots are now well established as major tracers used to estimate the stellar rotation period, but their dynamic behavior may also be used to analyze other relevant phenomena, such as the presence of magnetic activity and its cycles. To calculate the stellar rotation period, identify the presence of active regions and investigate whether or not the star exhibits differential rotation, we apply two methods: a wavelet analysis and a spot model. The wavelet procedure is also applied to study pulsation, in order to identify specific signatures of this particular stellar variability for different types of pulsating variable stars. The wavelet transform has been used as a powerful tool for treating several problems in astrophysics. In this work, we show that the time-frequency analysis of stellar light curves using the wavelet transform is a practical tool for identifying rotation, magnetic activity and pulsation signatures. We present the wavelet spectral composition and multiscale variations of the time series for four classes of stars: targets dominated by magnetic activity, stars with transiting planets, those with binary transits, and pulsating stars. We applied the Morlet wavelet (6th order), which offers high time and frequency resolution. By applying the wavelet transform to the signal, we obtain the local and global wavelet power spectra. The first is interpreted as the energy distribution of the signal in time-frequency space, and the second is obtained by time integration of the local map. Since the wavelet transform is a useful mathematical tool for non-stationary signals, this technique applied to Kepler and CoRoT light curves allows us to clearly identify particular signatures of different phenomena. In particular, patterns were identified for the temporal evolution of the rotation period and other periodicities due to active regions affecting these light curves. In addition, a beat-pattern signature was detected in the local wavelet map of pulsating stars over the entire time span. The second method is based on the detection of starspots during transits of an extrasolar planet orbiting its host star. As a planet eclipses its parent star, we can detect physical phenomena on the surface of the star. If a dark spot on the disk of the star is partially or totally eclipsed, the integrated stellar luminosity will increase slightly. By analyzing the transit light curve it is possible to infer the physical properties of starspots, such as size, intensity, position and temperature. By detecting the same spot on consecutive transits, it is possible to obtain additional information, such as the stellar rotation period at the planetary transit latitude, differential rotation and magnetic activity cycles. Transit observations of CoRoT-18 and Kepler-17 were used to implement this model.
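The sketch below illustrates the general idea of local and global wavelet power spectra for a light curve, using PyWavelets' real Morlet wavelet. It is only indicative: the thesis uses the 6th-order Morlet and its own normalization, and the scale grid and names here are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_power(flux, dt_days, scales=None):
    """Local and global wavelet power spectra of a light curve (minimal sketch)."""
    y = np.asarray(flux, float) - np.mean(flux)
    if scales is None:
        scales = np.arange(1, 128)
    coefs, freqs = pywt.cwt(y, scales, 'morl', sampling_period=dt_days)
    local_power = np.abs(coefs) ** 2              # energy in time-frequency space
    global_power = local_power.mean(axis=1)       # time-integrated (global) spectrum
    periods_days = 1.0 / freqs
    return local_power, global_power, periods_days
```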
Abstract:
This work proposes a modified control chart incorporating concepts of time series analysis. Specifically, we consider Gaussian mixture transition distribution (GMTD) models. The GMTD models are a more general class than the autoregressive (AR) family, in the sense that the autocorrelated processes may present flat stretches, bursts or outliers. In this scenario traditional Shewhart charts are no longer appropriate tools for monitoring such processes. Therefore, Vasilopoulos and Stamboulis (1978) proposed a modified version of those charts, with proper control limits based on the autocorrelated process. In order to evaluate the efficiency of the proposed technique, a comparison was made with a traditional Shewhart chart (which ignores the autocorrelation structure of the process), an AR(1) Shewhart control chart and a GMTD Shewhart control chart. An analytical expression for the process variance, as well as control limits, was developed for a particular GMTD model. The ARL was used as a criterion to measure the efficiency of the control charts. The comparison was based on a series generated according to a GMTD model. The results indicate that the modified GMTD Shewhart charts perform better than the AR(1) Shewhart and the traditional Shewhart charts.
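To make the idea of modified limits and the ARL criterion concrete, the sketch below handles the simpler AR(1) case: the limits are widened using the process variance sigma_x^2 = sigma_eps^2 / (1 - phi^2), and the ARL is estimated by Monte Carlo. This is not the GMTD derivation of the thesis; all function names and parameter values are assumptions.

```python
import numpy as np

def ar1_modified_limits(mu, sigma_eps, phi, k=3.0):
    """Modified Shewhart limits for an AR(1) process, using the process sd."""
    sigma_x = sigma_eps / np.sqrt(1.0 - phi ** 2)
    return mu - k * sigma_x, mu + k * sigma_x

def arl_by_simulation(phi=0.5, sigma_eps=1.0, shift=0.0, n_runs=2000, max_len=10000, seed=1):
    """Average run length (ARL) of the chart above, estimated by simulation."""
    rng = np.random.default_rng(seed)
    lcl, ucl = ar1_modified_limits(0.0, sigma_eps, phi)
    run_lengths = []
    for _ in range(n_runs):
        d = 0.0
        for t in range(1, max_len + 1):
            d = phi * d + rng.normal(0.0, sigma_eps)   # AR(1) deviation from the mean
            x = shift + d                              # step shift in the process mean
            if x < lcl or x > ucl:                     # out-of-control signal
                run_lengths.append(t)
                break
        else:
            run_lengths.append(max_len)
    return float(np.mean(run_lengths))

# In-control ARL (shift = 0) versus out-of-control ARL (shifted mean):
# print(arl_by_simulation(shift=0.0), arl_by_simulation(shift=1.0))
```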
Abstract:
Time series analysis has played an increasingly important role in weather and climate studies. The success of these studies depends crucially on knowledge of the quality of climate data such as, for instance, air temperature and rainfall data. For this reason, one of the main challenges for researchers in this field is to obtain homogeneous series. A time series of climate data is considered homogeneous when the values of the observed data change only due to climatic factors, i.e., without any interference from external non-climatic factors. Such non-climatic factors may produce undesirable effects in the time series, such as unrealistic homogeneity breaks, trends and jumps. In the present work, climatic time series for the city of Natal, RN, were investigated, namely air temperature and rainfall time series for the period from 1961 to 2012. The main purpose was to carry out an analysis in order to check the occurrence of homogeneity breaks or trends in the series under investigation. To this end, some basic statistical procedures were applied, such as normality and independence tests. The occurrence of trends was investigated by linear regression analysis, as well as by the Spearman and Mann-Kendall tests. Homogeneity was investigated by the SNHT, as well as by the Easterling-Peterson and Mann-Whitney-Pettitt tests. The normality analyses showed divergence in their results. The von Neumann ratio test showed that for the air temperature series the data are not independent and identically distributed (iid), whereas for the rainfall series the data are iid. According to the applied tests, both series display trends: the mean air temperature series displays an increasing trend, whereas the rainfall series shows a decreasing trend. Finally, the homogeneity tests revealed that all series under investigation present inhomogeneities, although the breaks depend on the applied test. In summary, the results showed that the chosen techniques may be applied in order to verify how well the studied time series are characterized. Therefore, these results should be used as a guide for further investigations of the statistical climatology of Natal or of any other place.
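As a reference for one of the trend tests cited above, the sketch below implements a basic Mann-Kendall test (without the tie correction); it is a generic textbook version, not the specific software used in the study.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic, Z score and two-sided p-value."""
    x = np.asarray(x, float)
    n = len(x)
    s = 0
    for i in range(n - 1):
        s += np.sum(np.sign(x[i + 1:] - x[i]))     # count of concordant minus discordant pairs
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance of S (no tie correction here)
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))             # two-sided p-value
    return s, z, p

# Example: s, z, p = mann_kendall(annual_mean_temperature)
```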