987 results for training optimization


Relevance:

100.00%

Publisher:

Abstract:

Pattern recognition techniques have long faced the problem of a high computational burden during dataset learning. Among the most widely used techniques, we may highlight Support Vector Machines (SVM), which have obtained very promising results for data classification. However, this classifier requires an expensive training phase, dominated by a parameter optimization that aims to make the SVM less prone to errors over the training set. In this paper, we model the problem of finding such parameters as a metaheuristic-based optimization task, performed through Harmony Search (HS) and some of its variants. The experimental results show the robustness of HS-based approaches for this task in comparison with an exhaustive (grid) search and with a Particle Swarm Optimization-based implementation.
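
To make the setup concrete, the sketch below tunes an SVM's (C, gamma) pair with a bare-bones Harmony Search. Scikit-learn's SVC, the iris data, the log2 search ranges, and the HS constants are all illustrative assumptions, not the paper's exact protocol or its HS variants.

```python
# Minimal Harmony Search sketch for SVM hyperparameter tuning (assumptions
# noted above; this is not the paper's implementation).
import random
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = random.Random(0)

LOW, HIGH = [-5.0, -15.0], [15.0, 3.0]    # log2 search ranges for C, gamma
HMS, HMCR, PAR, ITERS = 10, 0.9, 0.3, 50  # memory size, rates, iterations

def fitness(h):
    # Objective: 5-fold cross-validated accuracy (higher is better).
    return cross_val_score(SVC(C=2.0 ** h[0], gamma=2.0 ** h[1]),
                           X, y, cv=5).mean()

memory = [[rng.uniform(l, u) for l, u in zip(LOW, HIGH)] for _ in range(HMS)]
scores = [fitness(h) for h in memory]

for _ in range(ITERS):
    new = []
    for d in range(2):
        if rng.random() < HMCR:              # memory consideration
            v = memory[rng.randrange(HMS)][d]
            if rng.random() < PAR:           # pitch adjustment
                v += rng.uniform(-1.0, 1.0)
        else:                                # random consideration
            v = rng.uniform(LOW[d], HIGH[d])
        new.append(min(max(v, LOW[d]), HIGH[d]))
    s = fitness(new)
    worst = int(np.argmin(scores))
    if s > scores[worst]:                    # replace the worst harmony
        memory[worst], scores[worst] = new, s

i = int(np.argmax(scores))
print("best log2(C), log2(gamma):", memory[i], "cv accuracy:", scores[i])
```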

Relevance:

70.00%

Publisher:

Abstract:

In many sport associations, regardless of level, women and men rarely practice together. Previous studies indicate that work groups are generally more efficient when there is an even distribution between the sexes. Could that also be the case in sports? This study aims to investigate whether the sex composition of a training group affects the effort and performance of the participants. Eleven volunteers participated in the crossover study, which consisted of three different 150-meter sprint conditions: individual, single-sex group, and mixed-sex group. Sprint times, heart rate, and RPE were recorded during all three trials. The results suggest that there may be practical benefits, with regard to physical performance and effort, to exercising in a training group consisting of both sexes rather than training in a single-sex group or individually. This understanding could be useful in areas such as training optimisation for athletes and for patient and rehabilitation groups, increasing efficiency in work environments, and in schools and sports clubs striving for both athletic success and gender equality.

Relevance:

60.00%

Publisher:

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance:

60.00%

Publisher:

Abstract:

The objective of this work was to present recommendations for optimizing aerobic training, based on knowledge of functional fitness indexes and their physiological mechanisms. For highly trained athletes, precision in training design may be the safest way to improve aerobic performance, since for these individuals the training load typically sits in a narrow band between an insufficient stimulus and the onset of overtraining syndrome symptoms. Several factors should therefore be taken into account when designing a training program. Knowledge of fatigue mechanisms and of physiological responses at different exercise intensities and durations is essential for correctly structuring training sessions. Moreover, high-intensity interval training is indispensable for improving performance in highly trained athletes; however, it should be performed only after an adequate recovery period. A good relationship between coach and athlete is thus also important for planning suitable recovery periods before fatigue becomes excessive. The coach should keep accurate records of training loads and recovery times, thereby learning which loads each athlete can tolerate. Other important factors that can affect aerobic performance during competition, and should be considered, include appropriate warm-up planning and adverse environmental conditions. With all this information collected, it is possible to set the bases of training (frequency, volume, intensity, and recovery) aiming at progressive improvement of aerobic performance.

Relevance:

60.00%

Publisher:

Abstract:

Sport science has been supported by computer-based methods for several decades, and with the steady advance of technology, sports practice has in recent years increasingly been able to profit from them as well. Mathematical and computational models and algorithms are used to optimize performance in both team and individual sports. In the present work, the metamodel PerPot, developed by Prof. Perl in 2000, is adapted to endurance-oriented running. The changes concern both the internal model structure and the way the model parameters are determined. So that the model can be used in sports practice, a calibration test was developed with which the specific model parameters are fitted individually to each athlete. With the adapted model it is possible to reproduce, from given speed profiles, the corresponding heart-rate curves. With the model tuned to the athlete, runs can then be simulated by entering speed profiles. In practice, these simulations can be used to optimize training and competition. Training can be steered optimally by determining, through simulation, an individual anaerobic threshold heart rate; the statistical evaluation of this PerPot threshold shows significant agreement with the invasively determined lactate thresholds customary in sports practice. Competitions can be supported by determining an optimal speed profile through various simulation-based optimization procedures. With the newest method, the athlete even receives live predictions during the race, based on the speed and heart-rate data measured as the race unfolds. The race target times optimized with PerPot show high predictive quality compared with the finish times actually achieved.
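
As a loose illustration of the antagonistic-potential idea behind PerPot, the sketch below feeds a speed profile into two buffer potentials that drain with different delays and reads a heart-rate curve off the net level. Every constant and the heart-rate mapping are invented for illustration; this is not the calibrated model from the thesis.

```python
# Illustrative antagonistic delay model in the spirit of PerPot (all
# parameters and the HR mapping are made up, NOT the thesis's model).
def simulate_hr(speeds, d_strain=6.0, d_response=2.0,
                hr_rest=60.0, gain=14.0):
    strain = response = 0.0
    hr_curve = []
    for v in speeds:                       # v: speed (load) per time step
        strain += v                        # load raises both potentials
        response += v
        strain -= strain / d_strain        # slow drain: lingering strain
        response -= response / d_response  # fast drain: quick response
        hr_curve.append(hr_rest + gain * (strain - 0.5 * response))
    return hr_curve

print([round(h) for h in simulate_hr([3, 3, 4, 4, 5, 5, 3, 3])])
```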

Relevance:

40.00%

Publisher:

Abstract:

Purpose: The purpose of this study was to examine the influence of three different high-intensity interval training (HIT) regimens on endurance performance in highly trained endurance athletes. Methods: Before, and after 2 and 4 wk of training, 38 cyclists and triathletes (mean ± SD: age = 25 ± 6 yr; mass = 75 ± 7 kg; VO2peak = 64.5 ± 5.2 mL·kg⁻¹·min⁻¹) performed: 1) a progressive cycle test to measure peak oxygen consumption (VO2peak) and peak aerobic power output (PPO); 2) a time-to-exhaustion test (Tmax) at their VO2peak power output (Pmax); and 3) a 40-km time trial (TT40). Subjects were matched and assigned to one of four training groups (G1, N = 8, 8 × 60% Tmax at Pmax, 1:2 work:recovery ratio; G2, N = 9, 8 × 60% Tmax at Pmax, recovery at 65% HRmax; G3, N = 10, 12 × 30 s at 175% PPO, 4.5-min recovery; GCON, N = 11). In addition to G1, G2, and G3 performing HIT twice per week, all athletes maintained their regular low-intensity training throughout the experimental period. Results: All HIT groups improved TT40 performance (+4.4 to +5.8%) and PPO (+3.0 to +6.2%) significantly more than GCON (−0.9 to +1.1%; P < 0.05). Furthermore, G1 (+5.4%) and G2 (+8.1%) improved their VO2peak significantly more than GCON (+1.0%; P < 0.05). Conclusion: The present study has shown that when HIT incorporates Pmax as the interval intensity and 60% of Tmax as the interval duration, already highly trained cyclists can significantly improve their 40-km time trial performance. Moreover, the present data confirm prior research in that repeated supramaximal HIT can significantly improve 40-km time trial performance.

Relevance:

40.00%

Publisher:

Abstract:

Evolutionary algorithms have been widely used for Artificial Neural Network (ANN) training, the idea being to update the neurons' weights using the social dynamics of living organisms in order to decrease the classification error. In this paper, we introduce Social-Spider Optimization (SSO) to improve the training phase of ANNs with Multilayer Perceptrons (MLP), and we validate the proposed approach in the context of Parkinson's Disease recognition. The experimental section compares the approach against five other well-known metaheuristic techniques and shows that SSO can be a suitable approach for the ANN-MLP training step.
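
The encoding such methods rely on can be sketched as follows: each candidate solution is a flattened MLP weight vector scored by classification error. A simplified attract-toward-best move stands in for the full SSO rules (female/male spiders, vibration terms), and the toy data below replaces the Parkinson's dataset.

```python
# Metaheuristic MLP weight training sketch (simplified stand-in for SSO).
import numpy as np
rng = np.random.default_rng(0)

# Toy two-class data standing in for the Parkinson's dataset.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

H = 8                                   # hidden units
DIM = 4 * H + H + H * 2 + 2             # all MLP weights/biases, flattened

def error(w):
    # Unpack the flat vector into a 4-H-2 MLP and return its error rate.
    i = 0
    W1 = w[i:i + 4 * H].reshape(4, H); i += 4 * H
    b1 = w[i:i + H];                   i += H
    W2 = w[i:i + H * 2].reshape(H, 2); i += H * 2
    b2 = w[i:i + 2]
    pred = (np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
    return float(np.mean(pred != y))

pop = rng.normal(size=(30, DIM))        # the colony of candidate solutions
best_w, best_e = pop[0].copy(), error(pop[0])
for _ in range(100):
    for w in pop:
        e = error(w)
        if e < best_e:
            best_w, best_e = w.copy(), e
    # simplified social move: drift toward the best plus random vibration
    pop += 0.3 * (best_w - pop) + 0.1 * rng.normal(size=pop.shape)

print("training error of best candidate:", best_e)
```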

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a rational approach to the design of a catamaran's hydrofoil within a modern multidisciplinary optimization context. The approach includes response surfaces represented by neural networks and a distributed programming environment that increases optimization speed. A rational approach to the problem simplifies the complex optimization model; combined with the distributed dynamic training used for the response surfaces, it increases the efficiency of the process. The results achieved with this approach justify this publication.
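
A minimal sketch of the response-surface idea, assuming scikit-learn's MLPRegressor as the neural surrogate and a stand-in analytic function in place of the expensive hydrofoil evaluation; the paper's distributed training environment is not reproduced.

```python
# Surrogate-assisted optimization: fit a neural response surface to a few
# expensive evaluations, then search the cheap surrogate instead.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_objective(x):              # stand-in for a hydrofoil analysis
    return (x[:, 0] - 0.3) ** 2 + 0.2 * np.sin(3 * x[:, 1])

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 2))      # sampled candidate designs
y = expensive_objective(X)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# Cheap search over the surrogate rather than the true model.
cand = rng.uniform(0, 1, size=(10000, 2))
best = cand[surrogate.predict(cand).argmin()]
print("surrogate optimum:", best,
      "true value:", expensive_objective(best[None]))
```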

Relevance:

30.00%

Publisher:

Abstract:

Non-technical loss is not a problem with a trivial solution or of merely regional character, and its minimization underpins investments in product quality and in the maintenance of power systems in the competitive environment introduced after the period of privatization on the national scene. In this paper, we show how to improve the training phase of a neural-network-based classifier using a recently proposed metaheuristic called Charged System Search (CSS), which is based on the interactions between electrically charged particles. The experiments were carried out in the context of non-technical losses in power distribution systems, on a dataset obtained from a Brazilian electrical power company, and demonstrate the robustness of the proposed technique against several other nature-inspired optimization techniques for training neural networks. It is thus possible to improve some applications on Smart Grids.
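
The distinctive part of Charged System Search is its Coulomb-like move, sketched below on a stand-in objective. The charge definition and coefficients follow the usual CSS formulation only loosely (no charged memory), and plugging in a flattened-weight error fitness, as sketched for the SSO paper above, would give the neural-network training variant.

```python
# Simplified Charged System Search move on a stand-in objective.
import numpy as np
rng = np.random.default_rng(2)

def f(x):                                  # stand-in objective (sphere)
    return np.sum(x ** 2, axis=-1)

N, DIM, A = 20, 5, 1.0                     # particles, dimension, radius a
pos = rng.uniform(-5, 5, size=(N, DIM))
vel = np.zeros_like(pos)

for _ in range(200):
    fit = f(pos)
    # Fitter particles get larger charge and so attract the others more.
    q = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
    force = np.zeros_like(pos)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r = np.linalg.norm(d) + 1e-12
            # Coulomb-like law: linear inside radius A, inverse-square out.
            mag = q[j] * (r / A ** 3 if r < A else 1.0 / r ** 2)
            force[i] += mag * d / r
    vel = 0.5 * rng.random((N, 1)) * vel + 0.5 * rng.random((N, 1)) * force
    pos = pos + vel

print("best fitness:", f(pos).min())
```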

Relevance:

30.00%

Publisher:

Abstract:

Mixture materials, mix design, and pavement construction are not isolated steps in the concrete paving process. Each affects the other in ways that determine overall pavement quality and long-term performance. However, equipment and procedures commonly used to test concrete materials and concrete pavements have not changed in decades, leaving gaps in our ability to understand and control the factors that determine concrete durability. The concrete paving community needs tests that will adequately characterize the materials, predict interactions, and monitor the properties of the concrete. The overall objectives of this study are (1) to evaluate conventional and new methods for testing concrete and concrete materials to prevent material and construction problems that could lead to premature concrete pavement distress and (2) to examine and refine a suite of tests that can accurately evaluate concrete pavement properties. The project included three phases. In Phase I, the research team contacted each of 16 participating states to gather information about concrete and concrete material tests. A preliminary suite of tests to ensure long-term pavement performance was developed. The tests were selected to provide useful and easy-to-interpret results that can be performed reasonably and routinely in terms of time, expertise, training, and cost. The tests examine concrete pavement properties in five focal areas critical to the long life and durability of concrete pavements: (1) workability, (2) strength development, (3) air system, (4) permeability, and (5) shrinkage. The tests were relevant at three stages in the concrete paving process: mix design, preconstruction verification, and construction quality control. In Phase II, the research team conducted field testing in each participating state to evaluate the preliminary suite of tests and demonstrate the testing technologies and procedures using local materials. A Mobile Concrete Research Lab was designed and equipped to facilitate the demonstrations. This report documents the results of the 16 state projects. Phase III refined and finalized lab and field tests based on state project test data. The results of the overall project are detailed herein. The final suite of tests is detailed in the accompanying testing guide.

Relevance:

30.00%

Publisher:

Abstract:

The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm most efficient for one representation may be less efficient for others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the possible variables including the centers, widths, and weights of the basis functions, both with control parameters kept fixed and with them adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than the variants using all-fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
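
For context, a bare DE/rand/1/bin iteration is sketched below; F and CR are exactly the control parameters the fuzzy controller adapts in the study, but here they stay fixed and a stand-in objective replaces the RBF-network training error.

```python
# Minimal DE/rand/1/bin sketch (fixed F and CR; stand-in objective).
import numpy as np
rng = np.random.default_rng(3)

def f(x):                                     # stand-in objective
    return np.sum((x - 0.5) ** 2)

NP, DIM, F, CR = 20, 5, 0.8, 0.9
pop = rng.uniform(-2, 2, size=(NP, DIM))
fit = np.array([f(x) for x in pop])

for _ in range(300):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)              # differential mutation
        cross = rng.random(DIM) < CR          # binomial crossover mask
        cross[rng.integers(DIM)] = True       # guarantee one mutant gene
        trial = np.where(cross, mutant, pop[i])
        ft = f(trial)
        if ft <= fit[i]:                      # greedy one-to-one selection
            pop[i], fit[i] = trial, ft

print("best objective value:", fit.min())
```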

Relevance:

30.00%

Publisher:

Abstract:

Artificial vision tasks such as object recognition remain unsolved to this day. Learning algorithms such as Artificial Neural Networks (ANN) represent a promising approach for learning features useful for these tasks, but this optimization process is nevertheless difficult. Deep networks based on Restricted Boltzmann Machines (RBM) have recently been proposed to guide the extraction of intermediate representations through an unsupervised learning algorithm. This thesis presents, through three articles, contributions to this field of research. The first article deals with the convolutional RBM. The use of local receptive fields, together with the grouping of hidden units into layers sharing the same parameters, considerably reduces the number of parameters to learn and yields local, translation-equivariant feature detectors. This leads to models with better likelihood than RBMs trained on image patches. The second article is motivated by recent findings in neuroscience. It analyzes the impact of quadratic units on visual classification tasks, as well as that of a new activation function. We observe that ANNs with quadratic units using the softsign function generalize better. The last article offers a critical view of popular RBM training algorithms. We show that Contrastive Divergence (CD) and Persistent CD are not robust: both require a relatively flat energy surface for their negative chain to mix. Fast-weight PCD works around this problem by slightly perturbing the model, but this generates noisy samples. Using tempered chains in the negative phase is a robust way to address these problems and leads to better generative models.
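
For reference, the Contrastive Divergence update the third article critiques can be sketched in a few lines: a single Gibbs step supplies the negative statistics, which is precisely why a poorly mixing negative chain hurts. The data and layer sizes below are arbitrary, chosen only to make the sketch runnable.

```python
# CD-1 update for a binary RBM (toy data; sizes are arbitrary).
import numpy as np
rng = np.random.default_rng(4)

V, H, lr = 20, 10, 0.05
W = 0.01 * rng.normal(size=(V, H))
b, c = np.zeros(V), np.zeros(H)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

data = (rng.random((100, V)) < 0.3).astype(float)

for v0 in data:
    ph0 = sigmoid(v0 @ W + c)                 # positive phase
    h0 = (rng.random(H) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)               # one Gibbs step: the short
    v1 = (rng.random(V) < pv1).astype(float)  # negative chain CD relies on
    ph1 = sigmoid(v1 @ W + c)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)

print("weight norm after training:", np.linalg.norm(W))
```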

Relevance:

30.00%

Publisher:

Abstract:

Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this momentum, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to density estimation through Boltzmann Machines (BM), the probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins with a new adaptive sampling algorithm that automatically adjusts the temperature of the simulated Markov chains in order to maintain a high convergence speed throughout learning. Used in the context of Stochastic Maximum Likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to the learning of any probabilistic model that relies on Markov-chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. Unlike traditional approaches that treat a given model as a black box, we propose to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to the Kalman filter, but on a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the optimization theme, we also present an algorithm for applying the natural gradient efficiently to Boltzmann machines with thousands of units; until now, its adoption had been limited by its high computational cost and memory demands. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML, although its implementation unfortunately remains inefficient in compute time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify in order to model sparse binary distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (the "slabs"). This translates into increased invariance in the representation and a better classification rate when little labeled data is available.
We close the thesis with an ambitious topic: learning representations able to separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of pooling in complementary vector subspaces.
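
The tempered-chain idea behind the first contribution can be illustrated on a toy bimodal energy: chains run at several inverse temperatures, each takes Metropolis steps, and swaps are proposed between neighbouring temperatures so the cold chain inherits the hot chains' mobility. The fixed temperature ladder below is a simplification; the thesis's algorithm adapts it during learning.

```python
# Parallel-tempering sketch on a 1-D bimodal energy (fixed ladder).
import numpy as np
rng = np.random.default_rng(5)

def energy(x):                       # toy bimodal energy landscape
    return 4.0 * (x ** 2 - 1.0) ** 2

betas = np.array([1.0, 0.5, 0.25])   # inverse temperatures, cold to hot
x = np.zeros(len(betas))             # one chain state per temperature

for _ in range(5000):
    # Metropolis step within each tempered chain.
    prop = x + rng.normal(scale=0.3, size=x.shape)
    accept = rng.random(x.shape) < np.exp(-betas * (energy(prop) - energy(x)))
    x = np.where(accept, prop, x)
    # Propose a swap between a random pair of neighbouring temperatures.
    i = rng.integers(len(betas) - 1)
    delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if rng.random() < np.exp(delta):
        x[i], x[i + 1] = x[i + 1], x[i]

print("cold-chain sample:", x[0])
```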

Relevance:

30.00%

Publisher:

Abstract:

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function, and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints, in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large datasets are impossible to load into memory and cannot be solved using standard nonlinear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large datasets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVM to the problem of detecting frontal human faces in real images.
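
For reference, the quadratic program in question is the standard SVM dual over the \(\ell\) training points (the textbook formulation, not this paper's exact notation):

\[
\max_{\alpha}\;\sum_{i=1}^{\ell}\alpha_i-\frac{1}{2}\sum_{i=1}^{\ell}\sum_{j=1}^{\ell}\alpha_i\alpha_j\,y_i y_j\,K(x_i,x_j)
\quad\text{subject to}\quad
\sum_{i=1}^{\ell}\alpha_i y_i=0,\qquad 0\le\alpha_i\le C.
\]

The quadratic form involves the dense \(\ell\times\ell\) kernel matrix, which is why memory grows with the square of the number of data points; the decomposition works on a small subset of the \(\alpha\) variables at a time while the optimality conditions decide which variables enter the working set and when to stop.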

Relevance:

30.00%

Publisher:

Abstract:

Whilst radial basis function (RBF) equalizers have been employed to combat the linear and nonlinear distortions in modern communication systems, most of them do not take into account the equalizer's generalization capability. In this paper, it is first proposed that the model's generalization capability can be improved by treating the modelling problem as a multi-objective optimization (MOO) problem, with each objective based on one of several training sets. Then, as a modelling application, a new RBF equalizer learning scheme is introduced based on directional evolutionary MOO (EMOO). Directional EMOO improves the computational efficiency of conventional EMOO, which has been widely applied to MOO problems, by explicitly making use of directional information. Computer simulations demonstrate that the new scheme can be used to derive RBF equalizers with good performance, not only in explaining the training samples but also in predicting unseen samples.
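
The multi-objective view can be made concrete with a Pareto filter: score each candidate equalizer on every training set and keep only the non-dominated candidates. The random error matrix below stands in for actual RBF candidates, and the directional-EMOO search itself is not reproduced.

```python
# Pareto filter over per-training-set errors (stand-in candidate scores).
import numpy as np
rng = np.random.default_rng(6)

errors = rng.random((50, 3))          # 50 candidate models x 3 training sets

def non_dominated(E):
    # Keep i unless some j is at least as good everywhere and better once.
    keep = []
    for i, e in enumerate(E):
        dominated = any(np.all(E[j] <= e) and np.any(E[j] < e)
                        for j in range(len(E)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

print("Pareto-optimal candidates:", non_dominated(errors))
```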