971 results for Supervised training
Abstract:
Study Objectives: To test the effects of exercise training on sleep and neurovascular control in patients with systolic heart failure with and without sleep disordered breathing. Design: Prospective interventional study. Setting: Cardiac rehabilitation and exercise physiology unit and sleep laboratory. Patients: Twenty-five patients with heart failure, aged 42 to 70 years, and New York Heart Association Functional Class I-III were divided into 1 of 3 groups: obstructive sleep apnea (n = 8), central sleep apnea (n = 9), and no sleep apnea (n = 7). Interventions: Four months of no training (control) followed by 4 months of an exercise training program (three 60-minute, supervised exercise sessions per week). Measures and Results: Sleep (polysomnography), microneurography, forearm blood flow (plethysmography), peak VO(2), and quality of life were evaluated at baseline and at the end of the control and trained periods. No significant changes occurred in the control period. Exercise training reduced muscle sympathetic nerve activity (P < 0.001) and increased forearm blood flow (P < 0.01), peak VO(2) (P < 0.01), and quality of life (P < 0.01) in all groups, independent of the presence of sleep apnea. Exercise training improved the apnea-hypopnea index, minimum O(2) saturation, and amount of stage 3-4 sleep (P < 0.05) in patients with obstructive sleep apnea but had no significant effects in patients with central sleep apnea. Conclusions: The beneficial effects of exercise training on neurovascular function, functional capacity, and quality of life in patients with systolic dysfunction and heart failure occur independently of sleep disordered breathing. Exercise training lessens the severity of obstructive sleep apnea but does not affect central sleep apnea in patients with heart failure and sleep disordered breathing.
Abstract:
Purpose: To evaluate the effects of a six-month exercise training program on walking capacity, fatigue, and health-related quality of life (HRQL). Relevance: Familial amyloidotic polyneuropathy (FAP) is an autosomal neurodegenerative disease characterized by systemic deposition of amyloid fibrils, produced mainly in the liver, that accumulate mainly in the peripheral nervous system. FAP often results in severe functional limitations. Liver transplantation is, so far, the only therapy that stops the progression of some aspects of this disease. Transplantation requires aggressive medication that impairs muscle metabolism and, together with the surgical process and possible pre-existing functional impairments, can lead to serious deconditioning. Reports of fatigue are a common feature in transplanted patients. The effect of supervised or home-based exercise training programs in FAP patients after a liver transplant (FAPTX) is currently unknown.
Abstract:
Familial amyloidotic polyneuropathy is characterized by systemic deposition of amyloid fibrils, produced mainly in the liver, that accumulate mainly in the peripheral nervous system (but also in other systems such as the heart, gastrointestinal tract, and kidneys). Purpose of this study: to evaluate the effects of a six-month exercise training program (supervised or home-based) on walking capacity, fatigue, and health-related quality of life (HRQL) in Familial Amyloidotic Polyneuropathy patients who have undergone a liver transplant.
Abstract:
Background: Numerous studies show the benefits of exercise training after myocardial infarction (MI). Nevertheless, the effects on function and remodeling are still controversial. Objectives: To evaluate, in patients after MI, the effects of aerobic exercise of moderate intensity on ventricular remodeling assessed by cardiac magnetic resonance imaging (CMR). Methods: 26 male patients, 52.9 ± 7.9 years, after a first MI, were assigned to two groups: trained group (TG, n = 18) and control group (CG, n = 8). The TG performed supervised aerobic exercise on a treadmill twice a week, and unsupervised sessions on 2 additional days per week, for at least 3 months. Laboratory tests, anthropometric measurements, resting heart rate (HR), exercise testing, and CMR were conducted at baseline and follow-up. Results: The TG showed a 10.8% reduction in fasting blood glucose (p = 0.01) and a 7.3-bpm reduction in resting HR in both sitting and supine positions (p < 0.0001). There was an increase in oxygen uptake only in the TG (35.4 ± 8.1 to 49.1 ± 9.6 mL/kg/min, p < 0.0001). There was a statistically significant decrease in TG left ventricular mass (LVmass) (128.7 ± 38.9 to 117.2 ± 27.2 g, p = 0.0032). There were no statistically significant changes in left ventricular end-diastolic volume (LVEDV) or ejection fraction in either group. The LVmass/EDV ratio demonstrated statistically significant positive remodeling in the TG (p = 0.015). Conclusions: Aerobic exercise of moderate intensity improved physical capacity and other cardiovascular variables. Positive remodeling was identified in the TG, in which an increase in left ventricular diastolic dimension was associated with a reduction in LVmass.
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance, can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
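To make the ranking step concrete, here is a minimal sketch of one large-margin heuristic (margin sampling between the two most probable classes); the classifier, toy pool data, and batch size are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.svm import SVC

def margin_sampling(clf, X_unlabeled, batch_size=10):
    """Rank unlabeled pixels by the margin between the two most likely classes."""
    probs = clf.predict_proba(X_unlabeled)        # class posteriors
    part = np.sort(probs, axis=1)
    margin = part[:, -1] - part[:, -2]            # small margin = uncertain
    return np.argsort(margin)[:batch_size]        # most uncertain samples first

# toy loop: start from a few labeled pixels, iteratively query an "oracle"
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 5))               # stand-in for pixel features
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)
labeled = list(rng.choice(len(X_pool), 20, replace=False))

for _ in range(5):
    clf = SVC(probability=True).fit(X_pool[labeled], y_pool[labeled])
    unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
    query = unlabeled[margin_sampling(clf, X_pool[unlabeled])]
    labeled.extend(query.tolist())                # oracle provides the labels
```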
Abstract:
Introduction: The SMILING project, a multicentric project funded by the European Union, aims to develop a new gait and balance training program to prevent falls in older persons. The program includes the "SMILING shoe", an innovative device that generates mechanical perturbation while walking by changing the soles' inclination. Induced perturbations challenge subjects' balance and force them to react to avoid falls. By training specifically the complex motor reactions used to maintain balance when walking on irregular ground, the program will improve subjects' ability to react in situations of unsteadiness and reduce their risk of falling. Methods: The program will be evaluated in a multicentric, cross-over randomized controlled trial. Overall, 112 subjects (aged ≥65 years, ≥1 falls, POMA score 22-26/28) will be enrolled. Subjects will be randomised into 2 groups: group A begins the training with active "SMILING shoes", group B with inactive dummy shoes. After 4 weeks of training, groups A and B will exchange the shoes. Supervised training sessions (30 minutes twice a week for 8 weeks) include walking tasks of progressive difficulty. To avoid a learning effect, "SMILING shoes" perturbations will be generated in a non-linear and chaotic way. Gait performance, fear of falling, and acceptability of the program will be assessed. Conclusion: The SMILING program is an innovative intervention for falls prevention in older persons based on gait and balance training using chaotic perturbations. Because of the easy use of the "SMILING shoes", this program could be used in various settings, such as geriatric clinics or at home.
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
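A minimal sketch of the cluster-kernel idea, assuming a simple blend of an RBF kernel with a same-cluster indicator kernel estimated from all (labeled and unlabeled) pixels; the mixing weight, number of clusters, and toy data are assumptions, not the paper's actual formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def cluster_kernel(XA, XB, X_all, n_clusters=10, lam=0.5, gamma=0.5):
    """Blend an RBF kernel with a 'same-cluster' kernel learned from the image."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_all)
    ca, cb = km.predict(XA), km.predict(XB)
    K_base = rbf_kernel(XA, XB, gamma=gamma)
    K_clus = (ca[:, None] == cb[None, :]).astype(float)
    return (1 - lam) * K_base + lam * K_clus

# few labeled samples, many unlabeled ones (stand-ins for image pixels)
rng = np.random.default_rng(1)
X_unlab = rng.normal(size=(500, 4))
X_lab = rng.normal(size=(20, 4))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_all = np.vstack([X_lab, X_unlab])

K_train = cluster_kernel(X_lab, X_lab, X_all)
clf = SVC(kernel="precomputed").fit(K_train, y_lab)

X_test = rng.normal(size=(5, 4))
K_test = cluster_kernel(X_test, X_lab, X_all)   # rows: test samples, cols: training samples
pred = clf.predict(K_test)
```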
Abstract:
This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, a machine vision and pattern recognition problem in which an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but in this thesis a different technique is used, leading to higher-quality image features that enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually. The quality of the image features is therefore not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines interest point detection and local description into a single detection phase. Computational efficiency of the descriptor is therefore important, ruling out many of the commonly used descriptors as too heavy. Multiresolution Gabor features have been the main descriptor in this thesis, and improving their efficiency is a significant part of the work. Actual image features are formed from descriptors by using a classifier which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. Classifiers are used in a one-class configuration in which there are only positive training samples and no explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database. The localization performance was very good in these experiments. Other applications applying the same underlying techniques are also presented, including object categorization and fault detection.
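As a rough illustration of the one-class configuration described above, the sketch below fits a Gaussian mixture model to positive descriptor samples only and thresholds the log-likelihood; the descriptor dimensionality, threshold percentile, and toy data are assumptions, not the thesis's actual settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
pos_descriptors = rng.normal(loc=1.0, size=(300, 16))   # stand-in for Gabor descriptors

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(pos_descriptors)                                 # only positive samples are used

# threshold chosen so that, e.g., 95% of the positive training samples are accepted
threshold = np.percentile(gmm.score_samples(pos_descriptors), 5)

def is_feature(descriptor):
    """True if the patch descriptor is likely to belong to the learned feature."""
    return gmm.score_samples(descriptor[None, :])[0] >= threshold

print(is_feature(rng.normal(loc=1.0, size=16)))    # similar patch: likely True
print(is_feature(rng.normal(loc=-5.0, size=16)))   # dissimilar patch: likely False
```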
Abstract:
This study aimed to evaluate the effects of carvedilol treatment and a regimen of supervised aerobic exercise training on quality of life and other clinical, echocardiographic, and biochemical variables in a group of client-owned dogs with chronic mitral valve disease (CMVD). Ten healthy dogs (control) and 36 CMVD dogs were studied, with the latter group divided into 3 subgroups. In addition to conventional treatment (benazepril, 0.3-0.5 mg/kg once a day, and digoxin, 0.0055 mg/kg twice daily), 13 dogs received exercise training (subgroup I; 10.3±2.1 years), 10 dogs received carvedilol (0.3 mg/kg twice daily) and exercise training (subgroup II; 10.8±1.7 years), and 13 dogs received only carvedilol (subgroup III; 10.9±2.1 years). All drugs were administered orally. Clinical, laboratory, and Doppler echocardiographic variables were evaluated at baseline and after 3 and 6 months. Exercise training was conducted from months 3 to 6. The mean speed during training increased for both subgroups I and II (ANOVA, P<0.001), indicating improvement in physical conditioning at the end of the exercise period. Quality of life and functional class were improved for all subgroups at the end of the study. The N-terminal pro-brain natriuretic peptide (NT-proBNP) level increased in subgroup I from baseline to 3 months, but remained stable after the introduction of training (from 3 to 6 months). For subgroups II and III, NT-proBNP levels remained stable during the entire study. No difference was observed for the other variables between the three evaluation periods. The combination of carvedilol or exercise training with conventional treatment in CMVD dogs led to improvements in quality of life and functional class. Therefore, light walking should be encouraged in CMVD dogs.
Abstract:
Artificial vision tasks such as object recognition remain unsolved to this day. Learning algorithms such as Artificial Neural Networks (ANN) represent a promising approach for learning features useful for these tasks. This optimization process is nevertheless difficult. Deep networks based on Restricted Boltzmann Machines (RBM) have recently been proposed to guide the extraction of intermediate representations using an unsupervised learning algorithm. This thesis presents, through three articles, contributions to this field of research. The first article deals with the convolutional RBM. The use of local receptive fields, together with the grouping of hidden units into layers sharing the same parameters, considerably reduces the number of parameters to learn and yields local, translation-equivariant feature detectors. This leads to models with better likelihood than RBMs trained on image patches. The second article is motivated by recent discoveries in neuroscience. It analyzes the impact of quadratic units on visual classification tasks, as well as that of a new activation function. We observe that ANNs based on quadratic units using the softsign function give better generalization performance. The last article offers a critical view of popular RBM training algorithms. We show that the Contrastive Divergence (CD) algorithm and Persistent CD are not robust: both require a relatively flat energy surface for their negative chain to mix. "Fast-weight" PCD circumvents this problem by slightly perturbing the model; however, this generates noisy samples. The use of tempered chains in the negative phase is a robust way to address these problems and leads to better generative models.
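For readers unfamiliar with the training algorithm critiqued in the third article, here is a minimal sketch of one CD-1 update for a binary RBM; the layer sizes, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_visible, n_hidden, lr = 784, 64, 0.01
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One Contrastive Divergence (CD-1) update from a batch of binary visible vectors v0."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                       # positive phase
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                     # one step of Gibbs sampling
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)                       # negative phase
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / batch
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

cd1_step((rng.random((32, n_visible)) < 0.5).astype(float))
```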
Abstract:
This thesis focuses on a class of learning algorithms called deep architectures. There are results indicating that shallow and local representations are not sufficient for modeling functions with many factors of variation. We are particularly interested in this kind of data because we hope that an intelligent agent will be able to learn to model it automatically; the hypothesis is that deep architectures are better suited to model it. The work of Hinton (2006) was a true breakthrough, as the idea of using an unsupervised learning algorithm, restricted Boltzmann machines, to initialize the weights of a supervised neural network was crucial for training the most popular deep architecture, namely fully connected artificial neural networks. This idea has been taken up and successfully reproduced in several contexts and with a variety of models. In this thesis, we consider deep architectures as inductive biases. These biases are represented not only by the models themselves, but also by the training methods that are often used in conjunction with them. We wish to identify the reasons why this class of functions generalizes well, the situations in which these functions can be applied, and qualitative descriptions of such functions. The objective of this thesis is to gain a better understanding of the success of deep architectures. In the first article, we test the agreement between our intuitions---that deep networks are needed to learn better from data with many factors of variation---and the empirical results. The second article is an in-depth study of the question: why does unsupervised learning help a deep network generalize better? We explore and evaluate several hypotheses attempting to elucidate how these models work. Finally, the third article seeks to characterize qualitatively the functions modeled by a deep network. These visualizations facilitate the interpretation of the representations and invariances modeled by a deep architecture.
Abstract:
In this paper, a new methodology for the prediction of scoliosis curve types from non-invasive acquisitions of the back surface of the trunk is proposed. One hundred and fifty-nine scoliosis patients had their back surface acquired in 3D using an optical digitizer. Each surface is then characterized by 45 local measurements of back surface rotation. Using a semi-supervised algorithm, the classifier is trained with only 32 labeled and 58 unlabeled samples. Tested on 69 new samples, the classifier correctly classified 87.0% of the data. After reducing the number of labeled training samples to 12, the behavior of the resulting classifier remains similar to the reference case in which the classifier is trained with the maximum number of available labeled data. Moreover, the addition of unlabeled data guided the classifier towards more generalizable boundaries between the classes. These results provide a proof of feasibility for using a semi-supervised learning algorithm to train a classifier for the prediction of a scoliosis curve type when only a few training data are labeled. This constitutes a promising clinical finding, since it will allow the diagnosis and follow-up of scoliotic deformities without exposing the patient to X-ray radiation.
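As an illustration of training with few labeled and many unlabeled samples, the sketch below uses scikit-learn's LabelSpreading; this is not the study's algorithm, and the feature counts and toy data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(4)
X = rng.normal(size=(90, 45))            # stand-in for 45 back-surface rotation measurements
y_true = rng.integers(0, 3, size=90)     # three curve types (toy labels)
y = y_true.copy()
y[32:] = -1                              # only the first 32 samples are labeled

# the graph-based method propagates labels through the unlabeled samples
clf = LabelSpreading(kernel="rbf", gamma=0.1).fit(X, y)

X_new = rng.normal(size=(69, 45))        # new patients to classify
pred = clf.predict(X_new)
```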
Abstract:
Co-training is a semi-supervised learning method designed to take advantage of the redundancy that is present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio, and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of utilizing text and image data from the Web for co-training. We perform measurements to find indications of conditional independence in the texts and images obtained from the Web. Our measurements suggest that conditional independence is likely to be present in the data. Our experiments, within a relevance feedback framework designed to test whether a method that exploits the conditional independence outperforms methods that do not, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present.
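A minimal sketch of a co-training loop with two views, in which each view's classifier pseudo-labels its most confident unlabeled example for the shared training set; the classifiers, toy views, and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 400
y_all = rng.integers(0, 2, size=n)
view_text = y_all[:, None] + rng.normal(size=(n, 10))   # view 1: toy "text" features
view_img = y_all[:, None] + rng.normal(size=(n, 20))    # view 2: toy "image" features

labeled = list(range(20))            # initially labeled examples
unlabeled = set(range(20, n))
y = y_all.copy()

for _ in range(10):
    clf_t = LogisticRegression().fit(view_text[labeled], y[labeled])
    clf_i = LogisticRegression().fit(view_img[labeled], y[labeled])
    # each view pseudo-labels its most confident unlabeled example
    for clf, view in ((clf_t, view_text), (clf_i, view_img)):
        pool = np.array(sorted(unlabeled))
        if len(pool) == 0:
            break
        conf = clf.predict_proba(view[pool]).max(axis=1)
        idx = int(pool[conf.argmax()])
        y[idx] = clf.predict(view[idx:idx + 1])[0]       # pseudo-label
        labeled.append(idx)
        unlabeled.discard(idx)
```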
Abstract:
The design of binary morphological operators that are translation-invariant and locally defined by a finite neighborhood window corresponds to the problem of designing Boolean functions. As in any supervised classification problem, morphological operators designed from a training sample also suffer from overfitting; large neighborhoods tend to lead to performance degradation of the designed operator. This work proposes a multilevel design approach to deal with the issue of designing large neighborhood-based operators. The main idea is inspired by stacked generalization (a multilevel classifier design approach) and consists of, at each training level, combining the outcomes of the previous-level operators. The final operator is a multilevel operator that ultimately depends on a larger neighborhood than that of the individual operators that have been combined. Experimental results show that two-level operators obtained by combining operators designed on subwindows of a large window consistently outperform single-level operators designed on the full window. They also show that iterating two-level operators is an effective multilevel approach to obtain better results.
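A rough sketch of the two-level idea, assuming decision trees as stand-ins for the designed operators: first-level operators are trained on subwindows of a 5x5 window and a second-level operator combines their outputs. Window sizes, the classifier choice, and the toy target are assumptions, and a faithful stacked design would train the second level on held-out first-level outputs rather than on training outputs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
X = (rng.random((2000, 25)) < 0.5).astype(int)    # flattened 5x5 binary window patterns
y = (X.sum(axis=1) >= 13).astype(int)             # toy target: a majority (median) filter

# three overlapping 3x5 subwindows of the full 5x5 window
subwindows = [np.arange(0, 15), np.arange(5, 20), np.arange(10, 25)]

# level 1: one operator per subwindow
level1 = [DecisionTreeClassifier(random_state=0).fit(X[:, sw], y) for sw in subwindows]
Z = np.column_stack([op.predict(X[:, sw]) for op, sw in zip(level1, subwindows)])

# level 2: combines the level-1 outputs, so it effectively depends on the full window
level2 = DecisionTreeClassifier(random_state=0).fit(Z, y)

def apply_two_level(patterns):
    """Apply the two-level operator to flattened 5x5 binary window patterns."""
    z = np.column_stack([op.predict(patterns[:, sw]) for op, sw in zip(level1, subwindows)])
    return level2.predict(z)

print(apply_two_level(X[:5]))
```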
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)