16 results for articulated motion structure learning
at Université de Lausanne, Switzerland
Abstract:
Recent findings in neuroscience suggest that adult brain structure changes in response to environmental alterations and skill learning. Whereas much is known about structural changes after intensive practice for several months, little is known about the effects of single practice sessions on macroscopic brain structure and about progressive (dynamic) morphological alterations relative to improved task proficiency during learning over several weeks. Using T1-weighted and diffusion tensor imaging in humans, we demonstrate significant gray matter volume increases in frontal and parietal brain areas following only two sessions of practice in a complex whole-body balancing task. Gray matter volume increase in the prefrontal cortex correlated positively with subjects' performance improvements during a 6-week learning period. Furthermore, we found that microstructural changes of fractional anisotropy in corresponding white matter regions followed the same temporal dynamic in relation to task performance. The results make clear how marginal alterations in our ever-changing environment affect adult brain structure and elucidate the interrelated reorganization in cortical areas and associated fiber connections in correlation with improvements in task performance.
Abstract:
Purpose: To investigate the accuracy of 4 clinical instruments in the detection of glaucomatous damage. Methods: 102 eyes of 55 test subjects (mean age 66.5 years, range 39–89) underwent Heidelberg Retinal Tomography (HRT III; disc area < 2.43) and standard automated perimetry (SAP) using Octopus (Dynamic), Pulsar (TOP), and the Moorfields Motion Displacement Test (MDT; ESTA strategy). Eyes were separated into three groups: 1) Healthy (H): IOP < 21 mmHg and healthy discs on clinical examination, 39 subjects, 78 eyes; 2) Glaucoma suspect (GS): suspicious discs on clinical examination, 12 subjects, 15 eyes; 3) Glaucoma (G): progressive structural or functional loss, 14 subjects, 20 eyes. Clinical diagnostic precision was examined using the cut-off associated with the p < 5% normative limit of the MD (Octopus/Pulsar), PTD (MDT) and MRA (HRT) analyses. The sensitivity, specificity and accuracy were calculated for each instrument. Results: See table. Conclusions: Despite the advantage of defining glaucoma suspects using clinical optic disc examination, the HRT did not yield significantly higher accuracy than the functional measures. HRT, MDT and Octopus SAP yielded higher accuracy than Pulsar perimetry, although the results did not reach statistical significance. Further studies are required to investigate the structure-function correlations between these instruments.
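For reference, a minimal Python sketch (not the study's code; classifications and counts are illustrative) of how sensitivity, specificity and accuracy can be derived once each eye has been classified against the p < 5% normative cut-off:

# Illustrative sketch: diagnostic accuracy of one instrument against a reference
# standard, with the p < 5% normative cut-off as the positive criterion.
# Variable names and example data are hypothetical.

def diagnostic_accuracy(test_positive, has_glaucoma):
    """test_positive, has_glaucoma: parallel lists of booleans, one entry per eye."""
    tp = sum(t and g for t, g in zip(test_positive, has_glaucoma))
    tn = sum((not t) and (not g) for t, g in zip(test_positive, has_glaucoma))
    fp = sum(t and (not g) for t, g in zip(test_positive, has_glaucoma))
    fn = sum((not t) and g for t, g in zip(test_positive, has_glaucoma))
    sensitivity = tp / (tp + fn)            # true positives among glaucomatous eyes
    specificity = tn / (tn + fp)            # true negatives among non-glaucomatous eyes
    accuracy = (tp + tn) / len(has_glaucoma)
    return sensitivity, specificity, accuracy

# Example: eyes flagged when MD (or PTD/MRA) falls outside the normative limit
sens, spec, acc = diagnostic_accuracy(
    test_positive=[True, False, True, False],
    has_glaucoma=[True, False, False, False],
)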
Abstract:
Purpose: To examine the relationship of functional measurements with structural measures. Methods: 146 eyes of 83 test subjects underwent Heidelberg Retinal Tomography (HRT III; disc area < 2.43, mphsd < 40) and perimetry testing with Octopus (SAP; Dynamic), Pulsar (PP; TOP) and the Moorfields MDT (ESTA). Glaucoma was defined as progressive structural or functional loss (20 eyes). Perimetry test points were grouped into 6 sectors based on the estimated optic nerve head angle into which the associated nerve fiber bundle enters (Garway-Heath map). Perimetry summary measures (PSM) (MD SAP / MD PP / PTD MDT) were calculated from the average total deviation of each measured threshold from normal for each sector. We calculated the 95% significance level of the sectorial PSM from the respective normative data. We calculated the percentage agreement with group 1 (G1), healthy on HRT and within normal perimetric limits, and group 2 (G2), abnormal on HRT and outside normal perimetric limits. We also examined the relationship of PSM and rim area (RA) in those sectors classified as abnormal by the Moorfields Regression Analysis (MRA) of HRT. Results: The mean age was 65 years (range 37–89). The global sensitivity versus specificity of each instrument in detecting glaucomatous eyes was: MDT 80% vs. 88%, SAP 80% vs. 80%, PP 70% vs. 89% and HRT 80% vs. 79%. The highest percentage agreements of HRT with PSM (G1, G2, sector, respectively) were MDT (89%, 57%, nasal superior), SAP (83%, 74%, temporal superior) and PP (74%, 63%, nasal superior). Globally, percentage agreement (G1, G2, respectively) was MDT (92%, 28%), SAP (87%, 40%) and PP (77%, 49%). Linear regression showed no significant global trend associating RA and PSM. Sectorally, however, the supero-nasal sector showed a statistically significant (p < 0.001) trend with each instrument; the associated r² coefficients were MDT 0.38, SAP 0.56 and PP 0.39. Conclusions: There were no significant differences in global sensitivity or specificity between instruments. Structure-function relationships varied significantly between instruments and were consistently strongest supero-nasally. Further studies are required to investigate these relationships in detail.
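A hedged sketch of the two computations underlying these results, percentage agreement between HRT and perimetry classifications and the sector-wise regression of PSM on rim area, with invented data:

# Illustrative sketch (not the study's code): agreement and structure-function regression.
import numpy as np
from scipy import stats

def percent_agreement(hrt_abnormal, perimetry_abnormal):
    """Percentage of eyes on which HRT and perimetry classifications agree."""
    hrt = np.asarray(hrt_abnormal, dtype=bool)
    per = np.asarray(perimetry_abnormal, dtype=bool)
    return 100.0 * np.mean(hrt == per)

# PSM vs. rim area in sectors classified abnormal by MRA; values are hypothetical
rim_area = np.array([0.21, 0.18, 0.30, 0.12, 0.25])   # mm^2
psm = np.array([-6.1, -7.4, -3.2, -9.8, -4.5])        # dB
slope, intercept, r, p_value, se = stats.linregress(rim_area, psm)
print(percent_agreement([True, False, True], [True, True, True]))
print(f"r^2 = {r**2:.2f}, p = {p_value:.3f}")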
Abstract:
The potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has, in recent years, been explored mainly in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership function (T2MF) and the Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the structure of the chromosome has fewer genes than in other GA methods and chromosome initialization is more precise. The proposed approach addresses the application of the interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
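To make the FOU idea concrete, here is a minimal sketch (an assumed parameterization, not the paper's implementation) of a Gaussian interval type-2 membership function with an uncertain standard deviation; the band between the lower and upper membership grades is the footprint of uncertainty:

# Gaussian IT2MF with fixed mean and uncertain standard deviation [sigma_lower, sigma_upper].
# Parameter values are illustrative only.
import numpy as np

def gaussian_it2mf(x, mean, sigma_lower, sigma_upper):
    """Return (lower, upper) membership grades; the region between them is the FOU."""
    x = np.asarray(x, dtype=float)
    lower = np.exp(-0.5 * ((x - mean) / sigma_lower) ** 2)   # narrower Gaussian -> lower bound
    upper = np.exp(-0.5 * ((x - mean) / sigma_upper) ** 2)   # wider Gaussian -> upper bound
    return lower, upper

# A GA chromosome could encode just (mean, sigma_lower, sigma_upper) per feature,
# keeping the chromosome short; fitness = cross-validated classification accuracy.
lo, up = gaussian_it2mf(x=[0.2, 0.5, 0.9], mean=0.5, sigma_lower=0.1, sigma_upper=0.2)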
Abstract:
Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors of the left lateral cerebellum impair visual perception of body motion. However, the compensatory potential after cerebellar damage and the underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL, with a dysplastic gangliocytoma (Lhermitte-Duclos disease) of the left cerebellum, before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (MRI). Alterations in the underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a pattern of cerebellar activation during biological motion processing similar to that in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in healthy participants, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and the neural circuits underpinning visual social cognition.
Abstract:
Introduction: Evidence-based medicine (EBM) improves the quality of health care. Courses on how to teach EBM in practice are available, but knowledge does not automatically imply its application in teaching. We aimed to identify and compare barriers and facilitators for teaching EBM in clinical practice in various European countries. Methods: A questionnaire was constructed listing potential barriers and facilitators for EBM teaching in clinical practice. Answers were reported on a 7-point Likert scale ranging from "not a barrier at all" to "an insurmountable barrier". Results: The questionnaire was completed by 120 clinical EBM teachers from 11 countries. Lack of time was the strongest barrier for teaching EBM in practice (median 5). Moderate barriers were the lack of requirements for EBM skills and a pyramid hierarchy in the health care management structure (median 4). In Germany, Hungary and Poland, reading and understanding articles in English was a greater barrier than in the other countries. Conclusion: Incorporating the teaching of EBM into practice faces several barriers to implementation. Teaching EBM in clinical settings is most successful where EBM principles are culturally embedded and form part and parcel of everyday clinical decisions and medical practice.
Abstract:
As a thorough combination of probability and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series of papers intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction on how interested readers may use these analytical approaches - with the help of Bayesian networks - for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
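As an illustration of the building-block idea (not the paper's own fragments), the simplest two-node hypothesis-evidence fragment reduces to Bayes' theorem; the sketch below uses invented probabilities:

# Building-block fragment H -> E (hypothesis -> evidence) as a single Bayes update.
# All numbers are made up for illustration.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) for a binary hypothesis node H given that evidence E is observed."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1.0 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# e.g. E: measured toner characteristics; H: the questioned and known documents share a source
print(posterior(prior_h=0.5, p_e_given_h=0.8, p_e_given_not_h=0.1))

Larger inference models chain such fragments together, each node's conditional probability table playing the role of the likelihood terms above.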
Abstract:
Both Bayesian networks and probabilistic evaluation are gaining increasingly widespread use within many professional branches, including forensic science. Nevertheless, they constitute subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings may readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how these may be acquired by interested scientists new to these topics. This paper takes this as a starting point to report on the learning about Bayesian networks for likelihood ratio based, probabilistic inference procedures in a class of master's students in forensic science. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study - proposed to students in a teaching scenario - is the need to consider multiple competing propositions, a setting that may not readily be approached within a likelihood ratio based framework without drawing attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed students to derive an alternative Bayesian network structure with a computational output equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
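The technical detail with multiple competing propositions is that the likelihood ratio for one proposition must be taken against the prior-weighted pool of the alternatives. A minimal sketch under that standard formulation (propositions, priors and likelihoods are invented for illustration):

# Posterior probabilities over several competing propositions, and the LR of the
# first proposition against the prior-weighted pool of the others. Values invented.
import numpy as np

def posteriors(priors, likelihoods):
    """priors, likelihoods: one entry per competing proposition."""
    priors = np.asarray(priors, float)
    likelihoods = np.asarray(likelihoods, float)
    joint = priors * likelihoods
    return joint / joint.sum()

def lr_vs_alternatives(priors, likelihoods, index=0):
    """LR of proposition `index` versus the prior-weighted average of the alternatives."""
    priors = np.asarray(priors, float)
    likelihoods = np.asarray(likelihoods, float)
    others = np.delete(np.arange(len(priors)), index)
    denom = np.average(likelihoods[others], weights=priors[others])
    return likelihoods[index] / denom

# Three hypothetical propositions about a questioned signature (e.g. genuine, simulated, disguised)
print(posteriors([1/3, 1/3, 1/3], [0.9, 0.05, 0.2]))
print(lr_vs_alternatives([1/3, 1/3, 1/3], [0.9, 0.05, 0.2]))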
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned in particular with developing techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographic coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographic coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers tasks as varied as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach of experimental variography and according to machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and allows the detection of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a software collection, Machine Learning Office. This collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least as described by two-point statistics. A machine learning approach for ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
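For orientation, a minimal sketch of the kernel regression at the heart of a GRNN applied to spatial interpolation (this is not the Machine Learning Office code; data and the smoothing parameter are illustrative):

# GRNN / Nadaraya-Watson kernel regression for spatial interpolation of a measured
# variable from scattered (x, y) samples. Sample data are invented.
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """Kernel-weighted average of training values; sigma is the single smoothing parameter."""
    train_xy = np.asarray(train_xy, float)
    train_z = np.asarray(train_z, float)
    query_xy = np.asarray(query_xy, float)
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                                    # Gaussian kernel weights
    return (w @ train_z) / w.sum(axis=1)

# Hypothetical usage: interpolate a pollution measurement onto query locations;
# in practice sigma would be chosen by cross-validation.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 2.5, 3.0])
grid = np.array([[0.5, 0.5], [0.2, 0.8]])
print(grnn_predict(samples, values, grid, sigma=0.5))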
Abstract:
In this paper, we consider active sampling to label pixels grouped by hierarchical clustering. The objective of the method is to match the data relationships discovered by the clustering algorithm with the user's desired class semantics. The first is represented as a complete tree to be pruned; the second is iteratively provided by the user. The proposed active learning algorithm searches for the pruning of the tree that best matches the labels of the sampled points. By choosing the part of the tree to sample from according to the current pruning's uncertainty, sampling is focused on the most uncertain clusters. In this way, large clusters for which the class membership is already fixed are no longer queried, and sampling concentrates on dividing clusters showing mixed labels. The model is tested on a VHR image in a multiclass classification setting. The method clearly outperforms random sampling in a transductive setting, but cannot generalize to unseen data, since it aims at optimizing the classification of a given cluster structure.
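A hedged, simplified sketch of that sampling rule (the data structures are hypothetical, not the paper's implementation): among the clusters of the current pruning, the next query is drawn from the cluster whose sampled labels are most mixed.

# Query selection on a pruned cluster tree: pick the cluster with the highest label
# entropy, then sample a pixel from it. Cluster contents and labels are invented.
import random
from collections import Counter
from math import log

def cluster_entropy(labels):
    """Shannon entropy of the class labels already sampled inside one cluster."""
    if not labels:
        return float("inf")   # never-sampled clusters are treated as maximally uncertain
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in counts.values())

def next_query(pruning, sampled_labels):
    """pruning: {cluster_id: [pixel indices]}; sampled_labels: {cluster_id: [labels]}."""
    target = max(pruning, key=lambda cid: cluster_entropy(sampled_labels.get(cid, [])))
    return target, random.choice(pruning[target])

# Clusters whose sampled labels all agree get entropy 0 and are no longer queried,
# so sampling concentrates on clusters showing mixed labels.
pruning = {0: [0, 1, 2, 3], 1: [4, 5, 6], 2: [7, 8]}
sampled = {0: ["water", "water"], 1: ["road", "roof"], 2: []}
print(next_query(pruning, sampled))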
Abstract:
The flexibility of different regions of HIV-1 protease was examined by using a database consisting of 73 X-ray structures that differ in terms of sequence, ligands or both. The root-mean-square differences of the backbone for the set of structures were shown to have the same variation with residue number as those obtained from molecular dynamics simulations, normal mode analyses and X-ray B-factors. This supports the idea that observed structural changes provide a measure of the inherent flexibility of the protein, although specific interactions between the protease and the ligand play a secondary role. The results suggest that the potential energy surface of the HIV-1 protease is characterized by many local minima with small energetic differences, some of which are sampled by the different X-ray structures of the HIV-1 protease complexes. Interdomain correlated motions were calculated from the structural fluctuations and the results were also in agreement with molecular dynamics simulations and normal mode analyses. Implications of the results for the drug-resistance engendered by mutations are discussed briefly.
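To make the last step concrete, per-residue fluctuations and interdomain cross-correlations can be estimated from an ensemble of superposed structures roughly as sketched below (an assumed C-alpha representation and random data, purely for illustration):

# Per-residue RMSF and normalized cross-correlation of displacements from an
# ensemble of aligned structures. Input shape and data are hypothetical.
import numpy as np

def fluctuations_and_correlations(coords):
    """coords: array (n_structures, n_residues, 3) of superposed C-alpha positions."""
    coords = np.asarray(coords, float)
    disp = coords - coords.mean(axis=0)                        # displacement from mean structure
    rmsf = np.sqrt((disp ** 2).sum(axis=-1).mean(axis=0))      # per-residue fluctuation
    cov = np.einsum("sia,sja->ij", disp, disp) / coords.shape[0]   # covariance of displacements
    norm = np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
    return rmsf, cov / norm                                    # correlation matrix in [-1, 1]

# Hypothetical ensemble: 5 structures, 10 residues
rng = np.random.default_rng(0)
rmsf, corr = fluctuations_and_correlations(rng.normal(size=(5, 10, 3)))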
Abstract:
It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Proposing a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other consisted of university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation; the obstacles that emerged concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, the obstacles related to the computer program, and affordances emerged from combining experience-based knowledge with the scientists' skills, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.
Abstract:
This thesis comprises three essays addressing information, the learning process and risk in financial markets. It focuses first on the equilibrium implications of agent heterogeneity arising through a behavioral process of learning and information updating. It also examines the effects of risk sharing in a firm-supplier network. The first chapter studies the effects of availability bias on asset pricing. This bias describes the fact that agents overestimate the importance of information acquired through personal experience. The remaining heterogeneity in individual perceptions leads to a willingness to trade. Consistent with empirical data, young agents trade more but at the same time suffer from lower performance. The second chapter examines the impact that differences in agents' models have on their individual perceptions of the price process, in the context of model projection. Agents subject to a projection bias believe themselves to be representative and interpret the opinions of other agents as noise. Agents with more persistent models perceive prices as overreacting during periods of turbulence. The third chapter analyzes the impact of risk sharing in the firm-supplier relationship on the firm's optimal financing decision. It studies the impact on the optimization of the capital structure as well as on the cost of capital. The results indicate in particular that a supplier with low leverage is useful for financing a new investment project. For highly profitable projects and low-leverage suppliers, the firm's cost of equity can decrease.