910 results for articulated motion structure learning
Abstract:
In Europe, during the last decades of the twentieth century, the emergence of the Information Society imposed on organizations the need to attend, beyond technological innovation, to intangible assets such as information, new working methodologies, and know-how (Batista, 2002). In parallel with these innovations, Higher Education Institutions have contributed to the development of Human Capital as an intangible asset intrinsic to people. In Portugal, in the context of Distance Education and Training, some institutions still seem to have problems identifying and distinguishing the advantages of its open and flexible structure, with students and trainees having some difficulty matching their profiles and professional interests to the type of learning that suits them best. E-learning emerges as a method of Distance Education and Training made possible only by specific pedagogical processes together with Information and Communication Technologies (ICT), since it is these technologies that provide the support needed for its realization. By providing new forms of communication, interaction, and exchange of ideas, e-learning enables learning based on the sharing of knowledge, taking into account the experience and professional goals of the trainees. On these premises, we considered it important to conduct an investigation of Portuguese Higher Education Institutions in order to understand the role and influence of e-learning on the goals of academic organizations in general and on the Human Capital of their students and trainees in particular. From the research question, objectives and hypotheses were defined so that the resulting methodology would encompass the factors needed to confirm, or not, the stated assumptions.
Diverse documentation was analyzed, a questionnaire was created, and interviews were conducted in order to obtain the necessary and sufficient information for this purpose. The data collected for subsequent analysis, and the results once interpreted, will make it possible to answer the aims stated at the outset of the investigation.
Abstract:
Visual perception of body motion is vital for everyday activities such as social interaction, motor learning, and driving. Tumors of the left lateral cerebellum impair visual perception of body motion, but the compensatory potential after cerebellar damage and the underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL, who had a dysplastic gangliocytoma (Lhermitte-Duclos disease) of the left cerebellum, before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (fMRI). Alterations in the underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a pattern of cerebellar activation during biological motion processing similar to that of healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in healthy individuals, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is important for further understanding of cerebellar plasticity and the neural circuits underpinning visual social cognition.
Abstract:
Electron wave motion in a quantum wire with periodic structure is treated by direct solution of the Schrödinger equation as a mode-matching problem. Our method is particularly useful for a wire consisting of several distinct units, where the total transfer matrix for wave propagation is just the product of those for its basic units. It is generally applicable to any linearly connected serial device, and it can be implemented on a small computer. The one-dimensional mesoscopic crystal recently considered by Ulloa, Castaño, and Kirczenow [Phys. Rev. B 41, 12 350 (1990)] is discussed with our method, and is shown to be a strictly one-dimensional problem. Electron motion in the multiple-stub T-shaped potential well considered by Sols et al. [J. Appl. Phys. 66, 3892 (1989)] is also treated. A structure combining features of both of these is investigated.
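The product rule for serial devices described above can be sketched in a few lines. This is a generic illustration, not the paper's mode-matching solver; the flux-normalized 2x2 form and the example unit matrix below are assumptions:

```python
import numpy as np

def total_transfer_matrix(units):
    """Total transfer matrix of a linearly connected serial device:
    the ordered product of the transfer matrices of its basic units."""
    M = np.eye(2, dtype=complex)
    for unit in units:
        M = unit @ M  # each successive unit multiplies from the left
    return M

def transmission(M):
    """Transmission probability of a flux-normalized 2x2 transfer
    matrix: T = 1 / |M_22|^2."""
    return 1.0 / abs(M[1, 1]) ** 2

# A wire built from repeated copies of one unit reduces to a matrix
# power, which is what makes the approach efficient for periodic
# structures (the unit matrix here is an arbitrary example).
unit = np.array([[1.1, 0.2], [0.2, 1.1]], dtype=complex)
device = total_transfer_matrix([unit] * 4)
assert np.allclose(device, np.linalg.matrix_power(unit, 4))
```

A device combining several distinct units is handled the same way: the per-unit matrices are simply multiplied in order.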
Abstract:
Introduction: Evidence-based medicine (EBM) improves the quality of health care. Courses on how to teach EBM in practice are available, but knowledge does not automatically imply its application in teaching. We aimed to identify and compare barriers and facilitators for teaching EBM in clinical practice in various European countries. Methods: A questionnaire was constructed listing potential barriers and facilitators for EBM teaching in clinical practice. Answers were reported on a 7-point Likert scale ranging from not at all being a barrier to being an insurmountable barrier. Results: The questionnaire was completed by 120 clinical EBM teachers from 11 countries. Lack of time was the strongest barrier for teaching EBM in practice (median 5). Moderate barriers were the lack of requirements for EBM skills and a pyramid hierarchy in health care management structure (median 4). In Germany, Hungary and Poland, reading and understanding articles in English was a higher barrier than in the other countries. Conclusion: Incorporation of teaching EBM in practice faces several barriers to implementation. Teaching EBM in clinical settings is most successful where EBM principles are culturally embedded and form part and parcel of everyday clinical decisions and medical practice.
Abstract:
As a combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction on how interested readers may use these analytical approaches, with the help of Bayesian networks, for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic, context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
Abstract:
The performance of magnetic nanoparticles is intimately entwined with their structure, mean size, and magnetic anisotropy. In addition, ensembles offer a unique way of engineering the magnetic response by modifying the strength of the dipolar interactions between particles. Here we report an experimental and theoretical analysis of magnetic hyperthermia, a rapidly developing technique in medical research and oncology. Experimentally, we demonstrate that single-domain cubic iron oxide particles resembling bacterial magnetosomes have superior magnetic heating efficiency compared with spherical particles of similar sizes. Monte Carlo simulations at the atomic level corroborate the larger anisotropy of the cubic particles in comparison with the spherical ones, evidencing the beneficial role of surface anisotropy in the improved heating power. Moreover, we establish a quantitative link between particle assembly, the interactions, and the heating properties. This knowledge opens new perspectives for improved hyperthermia, an alternative to conventional cancer therapies.
Abstract:
Both Bayesian networks and probabilistic evaluation are gaining increasingly widespread use within many professional branches, including forensic science. Notwithstanding, they constitute subtle topics with definitional details that require careful study. While many sophisticated developments of probabilistic approaches to the evaluation of forensic findings may readily be found in the published literature, there remains a gap with respect to writings that focus on foundational aspects and on how these may be acquired by interested scientists new to these topics. This paper takes this as a starting point to report on learning about Bayesian networks for likelihood ratio based probabilistic inference procedures in a class of master's students in forensic science. The presentation uses an example that relies on a casework scenario drawn from the published literature, involving a questioned signature. A complicating aspect of that case study, proposed to students in a teaching scenario, is the need to consider multiple competing propositions, a setting that may not readily be approached within a likelihood ratio based framework without attention to some additional technical details. Using generic Bayesian network fragments from the existing literature on the topic, course participants were able to track the probabilistic underpinnings of the proposed scenario correctly, both in terms of likelihood ratios and of posterior probabilities. In addition, further study of the example allowed students to derive an alternative Bayesian network structure with a computational output equivalent to existing probabilistic solutions. This practical experience underlines the potential of Bayesian networks to support and clarify foundational principles of probabilistic procedures for forensic evaluation.
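The core computation behind extending the likelihood ratio to multiple competing propositions via posterior probabilities can be sketched as follows. The propositions, priors, and likelihoods below are hypothetical illustrations, not the values of the signature case study:

```python
def likelihood_ratio(l_hp, l_hd):
    """Likelihood ratio: how much more probable the findings are
    under one proposition than under a competing one."""
    return l_hp / l_hd

def posteriors(priors, likelihoods):
    """Posterior probabilities over competing propositions:
    P(H_i | E) is proportional to P(E | H_i) * P(H_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Three competing propositions about a questioned signature
# (hypothetical priors and likelihoods, for illustration only).
priors = [0.5, 0.3, 0.2]
likelihoods = [0.9, 0.1, 0.5]
post = posteriors(priors, likelihoods)
assert abs(sum(post) - 1.0) < 1e-12  # posteriors normalize to 1
```

With more than two propositions, pairwise likelihood ratios alone no longer summarize the evidence; normalizing the joint probabilities, as above, is the step a Bayesian network carries out automatically.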
Abstract:
This thesis is devoted to the analysis, modeling, and visualization of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with developing techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can solve classification, regression, and probability density modeling problems in high-dimensional spaces composed of geographical coordinates and additional spatially referenced explanatory variables ("geo-features"). They are well suited to serve as predictive engines in decision support systems for environmental questions, from pattern recognition to modeling, prediction, and automatic mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description to software implementation. The main algorithms considered are the multilayer perceptron (MLP), a workhorse of machine learning; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing (Kohonen) maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation.
Exploratory data analysis (EDA) is the initial and very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered both through the traditional geostatistical approach of experimental variography and through machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps detect spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties.
An important part of the thesis deals with the topical problem of automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and guided examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and assessment and susceptibility mapping of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly, easy-to-use interface.
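The k-NN approach to exploratory spatial prediction mentioned above can be illustrated in a few lines. This is a generic inverse-distance-weighted sketch, not the thesis's Machine Learning Office implementation; the toy monitoring network is invented for illustration:

```python
import numpy as np

def knn_predict(coords, values, query, k=3):
    """Inverse-distance-weighted k-nearest-neighbor prediction of a
    spatial variable at a query location."""
    d = np.linalg.norm(coords - query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-12)  # guard against zero distance
    return float(np.sum(w * values[nearest]) / np.sum(w))

# Toy monitoring network: four stations with measured values.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
values = np.array([1.0, 2.0, 3.0, 10.0])
estimate = knn_predict(coords, values, np.array([0.1, 0.1]), k=3)
assert 1.0 < estimate < 3.0  # dominated by the three nearby stations
```

The appeal for ESDA is exactly what the abstract notes: the prediction at any point is traceable to a handful of named neighbors, which makes the method easy to interpret and visualize.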
Abstract:
Academic advising is a key element for learning success in virtual environments that has received little attention from researchers. This paper focuses on the organizational arrangements needed for the delivery of academic advising in online higher education. We present the general dimensions of organizational structures (division of labor, hierarchy of authority and formalization) and their possible forms when applied to academic advising. The specific solution adopted at the Open University of Catalonia is described and assessed in order to draw general conclusions of interest for other institutions.
Abstract:
Networked international product development is an important part of success in today's changing business world. To make operations more efficient, project activities must also be adapted to the international operating environment, and to remain competitive they must be improved continuously. Project learning, which can be promoted in many different ways, is seen as one means to this end. This work focuses on the learning opportunities offered by developing project knowledge management. According to the literature, sharing project knowledge and exploiting it in subsequent projects is one of the prerequisites of project learning, and this has been adopted as the central perspective of this study. To delimit the research area, the work examines project learning specifically between international product development projects. The aim is to present the key challenges of project learning and to find a concrete solution to meet them. Product development activities and an internationally distributed project organization also face particular challenges, such as fragmented knowledge, project personnel turnover, confidentiality of information, and geographical constraints (e.g. time zones and site locations). These challenges were taken into account in the search for a solution, which took the form of an information-technology-based system designed around the needs and challenges of the case organization. The work examines the impact of the designed solution on project learning and how it responds to the identified challenges. The results showed that project learning did occur, even though it was difficult to observe directly among the members of the research organization; project learning can be said to take place when project knowledge is well organized and easily available to the whole project team, and these conditions, among others, were met. Project learning is nevertheless generally seen as a challenging development area in the case organization.
A large part of the knowledge involved is so-called tacit knowledge, which is difficult or impossible to put into written form, so knowledge transfer remains largely dependent on personal interaction. Project learning can nevertheless be developed through various operating models and methods, but such development requires resources, persistence, and time. Many changes may also require a change in organizational culture and influencing the members of the organization. Motivation, positive perceptions, and clear strategic goals create a stable foundation for developing project learning.
Abstract:
In this paper, we consider active sampling to label pixels grouped by hierarchical clustering. The objective of the method is to match the data relationships discovered by the clustering algorithm with the user's desired class semantics. The first is represented as a complete tree to be pruned, and the second is iteratively provided by the user. The proposed active learning algorithm searches for the pruning of the tree that best matches the labels of the sampled points. By choosing the part of the tree to sample from according to the current pruning's uncertainty, sampling is focused on the most uncertain clusters. In this way, large clusters whose class membership is already fixed are no longer queried, and sampling concentrates on the division of clusters showing mixed labels. The model is tested on a very high resolution (VHR) image in a multiclass classification setting. The method clearly outperforms random sampling in a transductive setting, but it cannot generalize to unseen data, since it aims at optimizing the classification of a given cluster structure.
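The idea of steering queries toward clusters with mixed labels can be sketched with a simple label-entropy score. This is a schematic of the general principle only, not the authors' tree-pruning algorithm; the cluster ids and labels are invented for illustration:

```python
from collections import Counter
from math import log

def label_entropy(labels):
    """Shannon entropy of the labels sampled from one cluster.
    Pure clusters score 0; mixed clusters score higher."""
    n = len(labels)
    counts = Counter(labels)
    return -sum((c / n) * log(c / n) for c in counts.values())

def next_cluster_to_sample(sampled):
    """Return the cluster id whose sampled labels are most mixed,
    i.e. where the current pruning is most uncertain."""
    return max(sampled, key=lambda cid: label_entropy(sampled[cid]))

# Cluster 'b' shows mixed labels, so it is queried next; the pure
# cluster 'a' is left alone, mirroring the focusing behavior above.
sampled = {"a": ["road", "road", "road"], "b": ["road", "roof", "roof"]}
assert next_cluster_to_sample(sampled) == "b"
```

In the paper's setting the score would be computed per node of the cluster tree, so that sampling descends into the subtree whose pruning is least settled.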
Abstract:
The flexibility of different regions of HIV-1 protease was examined by using a database consisting of 73 X-ray structures that differ in terms of sequence, ligands or both. The root-mean-square differences of the backbone for the set of structures were shown to have the same variation with residue number as those obtained from molecular dynamics simulations, normal mode analyses and X-ray B-factors. This supports the idea that observed structural changes provide a measure of the inherent flexibility of the protein, although specific interactions between the protease and the ligand play a secondary role. The results suggest that the potential energy surface of the HIV-1 protease is characterized by many local minima with small energetic differences, some of which are sampled by the different X-ray structures of the HIV-1 protease complexes. Interdomain correlated motions were calculated from the structural fluctuations and the results were also in agreement with molecular dynamics simulations and normal mode analyses. Implications of the results for the drug-resistance engendered by mutations are discussed briefly.
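The backbone comparison described above rests on the standard root-mean-square deviation between superimposed coordinate sets, which can be sketched as follows. This is a generic RMSD over pre-aligned coordinates, not the study's full per-residue analysis across the 73 structures; the toy coordinates are invented:

```python
import numpy as np

def backbone_rmsd(a, b):
    """Root-mean-square deviation between two equally sized,
    already superimposed sets of backbone coordinates (N x 3)."""
    diff = a - b
    return float(np.sqrt(np.sum(diff ** 2) / len(a)))

# Shifting every atom of a four-atom backbone by one unit along x
# yields an RMSD of exactly 1.
a = np.zeros((4, 3))
b = a.copy()
b[:, 0] = 1.0
assert abs(backbone_rmsd(a, b) - 1.0) < 1e-12
```

Computed residue by residue over a set of superimposed structures, such deviations give the per-residue flexibility profile that the study compares against molecular dynamics, normal modes, and B-factors.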
Abstract:
It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Proposing a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other consisted of university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation: the obstacles that emerged concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, the obstacles related to the computer program, and affordances emerged from combining experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.