994 results for Training algorithms
Abstract:
Abstract Background: Resistance training (RT) has been recommended as a non-pharmacological treatment for moderate hypertension. In spite of the important role of exercise intensity in training prescription, there are still no data regarding the effects of RT intensity on severe hypertension (SH). Objective: This study examined the effects of two RT protocols (vertical ladder climbing), performed at different overloads of maximal weight carried (MWC), on blood pressure (BP) and muscle strength of spontaneously hypertensive rats (SHR) with SH. Methods: Fifteen male SHR [206 ± 10 mmHg of systolic BP (SBP)] and five Wistar Kyoto rats (WKY; 119 ± 10 mmHg of SBP) were divided into four groups: sedentary (SED-WKY) and SHR (SED-SHR); RT1-SHR, trained relative to body weight (~40% of MWC); and RT2-SHR, trained relative to the MWC test (~70% of MWC). Systolic BP and heart rate (HR) were measured weekly using the tail-cuff method. The progression of muscle strength was determined once every fifteen days. The RT consisted of 3 weekly sessions on non-consecutive days for 12 weeks. Results: Both RT protocols prevented the increase in SBP (Δ -5 and -7 mmHg, respectively; p > 0.05), whereas SBP of the SED-SHR group increased by 19 mmHg (p < 0.05). HR decreased only in the RT1 group (p < 0.05). Strength increased more in the RT2 group (140%; p < 0.05) than in the RT1 group (11%; p > 0.05). Conclusions: Our data indicated that both RT protocols were effective in preventing chronic elevation of SBP in SH. Additionally, a higher RT overload induced a greater increase in muscle strength.
Abstract:
Abstract Background: Numerous studies show the benefits of exercise training after myocardial infarction (MI). Nevertheless, the effects on function and remodeling are still controversial. Objectives: To evaluate, in patients after MI, the effects of aerobic exercise of moderate intensity on ventricular remodeling by cardiac magnetic resonance imaging (CMR). Methods: 26 male patients, 52.9 ± 7.9 years, after a first MI, were assigned to two groups: trained group (TG), n = 18; and control group (CG), n = 8. The TG performed supervised aerobic exercise on a treadmill twice a week, plus unsupervised sessions on 2 additional days per week, for at least 3 months. Laboratory tests, anthropometric measurements, resting heart rate (HR), exercise testing, and CMR were conducted at baseline and follow-up. Results: The TG showed a 10.8% reduction in fasting blood glucose (p = 0.01) and a 7.3-bpm reduction in resting HR in both sitting and supine positions (p < 0.0001). Oxygen uptake increased only in the TG (35.4 ± 8.1 to 49.1 ± 9.6 mL/kg/min, p < 0.0001). There was a statistically significant decrease in left ventricular mass (LVmass) in the TG (128.7 ± 38.9 to 117.2 ± 27.2 g, p = 0.0032). There were no statistically significant changes in left ventricular end-diastolic volume (LVEDV) or ejection fraction in either group. The LVmass/EDV ratio indicated statistically significant positive remodeling in the TG (p = 0.015). Conclusions: Aerobic exercise of moderate intensity improved physical capacity and other cardiovascular variables. Positive remodeling was identified in the TG, in which an increase in left ventricular diastolic dimension was associated with a reduction in LVmass.
Abstract:
This work describes a test tool for performance testing of different end-to-end available bandwidth estimation algorithms and of their different implementations. The goal of such tests is to find the best-performing algorithm and implementation and to use it in the congestion control mechanism of high-performance reliable transport protocols. The main idea of this paper is to describe the options for providing an available bandwidth estimation mechanism for high-speed data transport protocols and to develop the basic functionality of such a test tool, with which it is possible to manage the test-application entities on all testing hosts involved, aided by some middleware.
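The abstract sketches the tool's architecture only at a high level (a controller managing test-application entities on the participating hosts via middleware). A minimal, hypothetical Python sketch of such a controller is shown below; the host names, the estimator command-line interface, and its output format are assumptions made purely for illustration, not details from the paper.

```python
import re
import subprocess

# Hypothetical test hosts and estimator implementations under comparison.
SENDER, RECEIVER = "testhost-a", "testhost-b"
IMPLEMENTATIONS = ["estimator-impl-1", "estimator-impl-2", "estimator-impl-3"]

def run_remote(host, command):
    """Run a command on a remote test host over ssh and return its stdout."""
    result = subprocess.run(["ssh", host, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

def measure(impl):
    """Start the receiver side, then the sender side, and parse the reported estimate."""
    # Assumed CLI: '<impl> --receiver' / '<impl> --sender <host>', printing a line
    # such as 'available-bandwidth: 742.3 Mbit/s' (an illustrative output format).
    run_remote(RECEIVER, f"{impl} --receiver &")
    out = run_remote(SENDER, f"{impl} --sender {RECEIVER}")
    match = re.search(r"available-bandwidth:\s*([\d.]+)", out)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    for impl in IMPLEMENTATIONS:
        print(impl, measure(impl), "Mbit/s")
```

In a real deployment the role of the ad-hoc ssh calls would be played by the middleware mentioned in the abstract, which would also synchronize the estimator entities on the involved hosts so that the implementations are compared under the same conditions.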
Abstract:
Magdeburg, Univ., Faculty of Human Sciences, Diss., 2013
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT)/Inverse Fast Fourier Transform (IFFT) and proper techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (lookup tables, LUTs) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to the various FFT/IFFT algorithms, along with their abilities to exploit parallel processing with minimal data dependencies in the FFT/IFFT calculations. An interesting approach that is also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the capability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also provide high performance by utilizing a specialized FFT/IFFT hardware architecture that can exploit the parallelism provided by the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in the polar coordinate system, using sine and cosine lookup tables, and an approach for performing parallel addition of N input symbols.
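The lookup-table idea mentioned in the abstract can be illustrated with a short software sketch: the plain DFT below draws all of its twiddle factors from precomputed cosine/sine tables indexed modulo N. This is only a Python analogue written for illustration (not code from the paper); the parallel accumulation of the N partial products that the proposed hardware would perform is indicated in a comment.

```python
import numpy as np

def dft_with_lut(x):
    """Naive DFT whose twiddle factors come from precomputed cos/sin lookup tables."""
    N = len(x)
    n = np.arange(N)
    cos_lut = np.cos(2.0 * np.pi * n / N)   # one period of cosine, N entries
    sin_lut = np.sin(2.0 * np.pi * n / N)   # one period of sine, N entries
    X = np.zeros(N, dtype=complex)
    for k in range(N):
        idx = (k * n) % N                    # twiddle index wraps around the unit circle
        # In hardware, these N products and their summation could be done in parallel.
        X[k] = np.sum(x * (cos_lut[idx] - 1j * sin_lut[idx]))
    return X

# Sanity check against NumPy's FFT on a random signal.
x = np.random.rand(16)
assert np.allclose(dft_with_lut(x), np.fft.fft(x))
```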
Abstract:
Fabienne-Agnes Baumann, Klaus Jenewein, Axel Müller
Abstract:
Some practical aspects of implementing genetic algorithms for the life-cycle management of electrotechnical equipment are considered.
Abstract:
Magdeburg, Univ., Faculty of Human Sciences, Diss., 2015
Abstract:
It is common to find in experimental data persistent oscillations in aggregate outcomes and high levels of heterogeneity in individual behavior. Furthermore, it is not unusual to find significant deviations from aggregate Nash equilibrium predictions. In this paper, we employ an evolutionary model with boundedly rational agents to explain these findings. We use data from common property resource experiments (Casari and Plott, 2003). Instead of positing individual-specific utility functions, we model decision makers as selfish and identical. Agent interaction is simulated using an individual-learning genetic algorithm in which agents have limited working memory, a limited ability to maximize, and experiment with new strategies. We show that the model replicates most of the patterns that can be found in common property resource experiments.
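The abstract does not spell out the learning algorithm, so the following is only a schematic Python sketch of an individual-learning genetic algorithm of the kind described: each selfish, identical agent keeps a small memory of candidate appropriation levels for a common-pool resource game, evaluates them against the other agents' most recent total, and renews them through selection, mutation, and occasional experimentation. The payoff function and every parameter value are illustrative assumptions, not those of Casari and Plott (2003).

```python
import random

N_AGENTS, MEMORY, ENDOWMENT = 8, 6, 20      # illustrative parameters
A, B, WAGE = 23.0, 0.25, 5.0                # CPR production F(X) = A*X - B*X^2, outside wage

def payoff(own, others_total):
    """Selfish payoff: wage on the unused endowment plus a proportional share of F(X)."""
    X = own + others_total
    share = 0.0 if X == 0 else (own / X) * (A * X - B * X * X)
    return WAGE * (ENDOWMENT - own) + share

def learn(memory, others_total, p_mutate=0.1, p_experiment=0.05):
    """One GA step on a single agent's memory of candidate strategies."""
    ranked = sorted(memory, key=lambda s: payoff(s, others_total), reverse=True)
    new = ranked[: MEMORY // 2]                       # selection: keep the better half
    while len(new) < MEMORY:
        child = random.choice(new)
        if random.random() < p_mutate:                # mutation: small local tweak
            child = min(ENDOWMENT, max(0, child + random.choice([-1, 1])))
        if random.random() < p_experiment:            # experimentation: fresh random draw
            child = random.randint(0, ENDOWMENT)
        new.append(child)
    return new

agents = [[random.randint(0, ENDOWMENT) for _ in range(MEMORY)] for _ in range(N_AGENTS)]
choices = [random.choice(m) for m in agents]
for t in range(200):
    total = sum(choices)                              # aggregate appropriation this period
    agents = [learn(m, total - c) for m, c in zip(agents, choices)]
    choices = [max(m, key=lambda s, o=total - c: payoff(s, o))
               for m, c in zip(agents, choices)]
    if t % 50 == 0:
        print(t, total)
```

Because each agent can only rank the handful of strategies it currently remembers against last period's play, individual choices keep fluctuating and the aggregate keeps oscillating, which is the kind of pattern the paper sets out to reproduce.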
Abstract:
"Vegeu el resum a l'inici del fitxer adjunt."
Abstract:
We study the properties of the well-known replicator dynamics when applied to a finitely repeated version of the Prisoner's Dilemma game. We characterize the behavior of the dynamics under strongly simplifying assumptions (i.e., only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax these 'strongly simplifying assumptions', we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics, without imposing any of the assumptions of the analytical model. Our main conclusion is that analytical and computational models are good complements for research in the social sciences. Indeed, computational models are extremely useful for extending the scope of the analysis to complex scenarios.
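The abstract does not list the three strategies or payoffs, so the sketch below is only an illustrative reconstruction in Python: it assumes the strategy set is always-defect, always-cooperate and tit-for-tat, builds the repeated-game payoff matrix for a given number of rounds from standard Prisoner's Dilemma stage payoffs (T = 5, R = 3, P = 1, S = 0), and iterates the discrete-time replicator dynamics. Varying `rounds` and the initial mix `x0` gives a feel for how the basin of attraction of defection shrinks as the game gets longer.

```python
import numpy as np

T, R, P, S = 5.0, 3.0, 1.0, 0.0   # assumed stage-game payoffs

def stage(a, b):
    """Stage payoff to a player choosing a ('C'/'D') against an opponent choosing b."""
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

def play(strat_i, strat_j, rounds):
    """Total payoff of strategy i against strategy j in the repeated game."""
    last_i = last_j = None
    total = 0.0
    for _ in range(rounds):
        a, b = strat_i(last_j), strat_j(last_i)
        total += stage(a, b)
        last_i, last_j = a, b
    return total

# The three assumed strategies: always defect, always cooperate, tit-for-tat.
ALLD = lambda opp_last: "D"
ALLC = lambda opp_last: "C"
TFT = lambda opp_last: "C" if opp_last is None else opp_last
strategies = [ALLD, ALLC, TFT]

def replicator(rounds, x0, steps=500):
    A = np.array([[play(si, sj, rounds) for sj in strategies] for si in strategies])
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        fitness = A @ x
        x = x * fitness / (x @ fitness)   # discrete-time replicator update
    return x

print(replicator(rounds=2, x0=[0.3, 0.3, 0.4]))    # short game: defection takes over
print(replicator(rounds=50, x0=[0.3, 0.3, 0.4]))   # long game: tit-for-tat survives
```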
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means to model local anomalies that typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity from measurements taken in the Briansk region following the Chernobyl accident.
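The abstract describes the multi-scale extension only conceptually. One plausible way to mimic the idea in software (not the authors' actual formulation) is to give an SVR a kernel built as a weighted sum of a short length-scale and a large length-scale RBF kernel, so that short- and large-scale spatial structure are fitted simultaneously; the gammas, the mixing weight, and the synthetic data below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

def two_scale_kernel(X, Y, gamma_short=50.0, gamma_large=0.5, w=0.3):
    """Weighted sum of a short-scale and a large-scale RBF kernel (illustrative mixture)."""
    return w * rbf_kernel(X, Y, gamma=gamma_short) + (1.0 - w) * rbf_kernel(X, Y, gamma=gamma_large)

# Synthetic 2-D spatial field: a smooth large-scale trend plus one local anomaly.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1.0, size=(300, 2))
target = (np.sin(2 * np.pi * coords[:, 0])                      # large-scale trend
          + 2.0 * np.exp(-60.0 * ((coords - 0.5) ** 2).sum(1))  # short-scale anomaly
          + 0.05 * rng.normal(size=len(coords)))                # measurement noise

model = SVR(kernel=two_scale_kernel, C=10.0, epsilon=0.05)
model.fit(coords, target)
print("training R^2:", model.score(coords, target))
```

As the abstract notes, some prior knowledge is still needed: here it enters through the choice of the short-scale gamma and the mixing weight, which determine how small an anomaly the model can resolve.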