804 results for Computational learning theory
Abstract:
SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within that map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor, and thus perceives different sections of its world. In addition, so that our system can be used in a low-cost robot, we use low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers.
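The O(N) claim above amounts to a Bayesian update whose cost per map feature is constant, so one pass over the N features suffices. A minimal sketch, not the paper's model (the Gaussian fusion rule, the feature list and the variances here are illustrative assumptions):

```python
# Precision-weighted fusion of two Gaussian estimates of the same quantity.
def fuse(mu_a, var_a, mu_b, var_b):
    w = var_b / (var_a + var_b)
    mu = w * mu_a + (1 - w) * mu_b
    var = (var_a * var_b) / (var_a + var_b)
    return mu, var

# One O(N) pass: fuse each feature's current estimate with a new sensor reading.
def update_map(features, readings, sensor_var):
    return [fuse(mu, var, z, sensor_var)
            for (mu, var), z in zip(features, readings)]

features = [(2.0, 1.0), (5.0, 1.0)]            # (mean, variance) per map feature
updated = update_map(features, [2.4, 4.8], sensor_var=1.0)
```

Each feature is touched exactly once per update, so the cost grows linearly in the number of features rather than with the quadratic covariance bookkeeping of EKF-style SLAM.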
Abstract:
Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Because of their importance in search processes over large volumes of data, researchers pay special attention to the development of efficient outlier-detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed; a detection method is proposed, along with a computational algorithm that performs outlier-detection tasks with almost-linear complexity. To illustrate its viability, the results of applying the outlier-detection algorithm to the concrete example of a large database are presented.
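As a rough illustration of how near-linear complexity is achievable in outlier detection (this is not the paper's rough-set construction; the rarity-based score is an illustrative assumption), one can score each record by how infrequent its attribute values are, using two linear passes over the data:

```python
from collections import Counter

def outlier_scores(records):
    # Pass 1: per-attribute value frequencies.
    counts = [Counter(col) for col in zip(*records)]
    n = len(records)
    # Pass 2: a record scores high when its values are rare in their columns.
    return [sum(1 - c[v] / n for c, v in zip(counts, rec)) for rec in records]

data = [("a", 1), ("a", 1), ("a", 1), ("b", 2)]
scores = outlier_scores(data)   # the ("b", 2) record stands out
```

Both passes visit each record once, so the cost is linear in the number of records (times the number of attributes).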
Abstract:
Background: Despite the progress made on policies and programmes to strengthen primary health care teams’ response to Intimate Partner Violence (IPV), the literature shows that encounters between women exposed to IPV and health-care providers are not always satisfactory, and a number of barriers that prevent individual health-care providers from responding to IPV have been identified. We carried out a realist case study, for which we developed and tested a programme theory that seeks to explain how, why and under which circumstances a primary health care team in Spain learned to respond to IPV. Methods: A realist case study design was chosen to allow for an in-depth exploration of the linkages between context, intervention, mechanisms and outcomes as they happen in their natural setting. The first author collected data at the primary health care center La Virgen (pseudonym) through the review of documents, observation and interviews with health systems’ managers, team members, women patients, and members of external services. The quality of the IPV case management was assessed with the PREMIS tool. Results: This study found that the health care team at La Virgen has managed 1) to engage a number of staff members in actively responding to IPV, 2) to establish good coordination, mutual support and continuous learning processes related to IPV, 3) to establish adequate internal referrals within La Virgen, and 4) to establish good coordination and referral systems with other services. Team- and individual-level factors have triggered the capacity and interest in creating spaces for team learning, team work and therapeutic responses to IPV in La Virgen, although individual motivation strongly affected this mechanism. Regional interventions did not trigger individual and/or team responses but legitimated the workings of motivated professionals.
Conclusions: The primary health care team of La Virgen is involved in a continuous learning process, even as participation in the process varies between professionals. This process has been supported, but not caused, by a favourable policy for integration of a health care response to IPV. Specific contextual factors of La Virgen facilitated the uptake of the policy. To some extent, the performance of La Virgen has the potential to shape the IPV learning processes of other primary health care teams in Murcia.
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease in which the heart muscle is partially thickened and blood flow is, potentially fatally, obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests because of the cost and time involved in having an expert cardiologist interpret the results of these tests. Initially, we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past to classify individual heartbeats into different types of arrhythmia, as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients to two groups: HCM vs. non-HCM. The challenges we address include missing ECG waves in one or more leads and the high dimensionality of a large feature set; we address these by proposing imputation and feature-selection methods. We develop heartbeat classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM.
The results of our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus, the two contributions of this thesis are the use of computational and statistical methods to discover shortcomings in a current screening procedure, and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
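The recording-level decision rule described above (label a full ECG by the proportion of its heartbeats classified as HCM) can be sketched as follows. Here `beat_classifier` is a hypothetical stand-in for the trained Random Forest or SVM heartbeat model, and the threshold and toy amplitude stub are illustrative assumptions, not values from the dissertation:

```python
def classify_recording(beats, beat_classifier, threshold=0.5):
    """Label a recording HCM when enough of its beats are classified as HCM."""
    votes = [beat_classifier(b) for b in beats]       # 1 = HCM, 0 = non-HCM
    proportion = sum(votes) / len(votes)
    return ("HCM" if proportion > threshold else "non-HCM"), proportion

# Toy stub standing in for the trained per-beat model:
stub = lambda beat: int(max(beat) > 1.5)
label, p = classify_recording([[0.2, 1.8], [0.1, 0.9], [0.3, 1.7]], stub)
```

Aggregating per-beat votes makes the recording-level decision robust to a few misclassified or missing beats in individual leads.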
Abstract:
Information retrieval is concerned, among other things, with answering questions such as: is a document relevant to a query? Are two queries or two documents similar? How can the similarity between two queries or documents be used to improve the estimation of relevance? To answer these questions, each document and query must be associated with representations that a computer can interpret. Once these representations are estimated, similarity can correspond, for example, to a distance or a divergence operating in the representation space. It is generally accepted that the quality of a representation has a direct impact on the estimation error with respect to the true relevance, as judged by a human. Estimating good representations of documents and queries has long been a central problem in information retrieval. The goal of this thesis is to propose new methods for estimating the representations of documents and queries and the relevance relation between them, and thereby to modestly advance the state of the art in the field. We present four articles published in international conferences and one article published in an evaluation forum. The first two articles concern methods that build the representation space from prior knowledge of the features that matter for the task at hand. These lead us to present a new information retrieval model that differs from existing models both theoretically and in experimental effectiveness. The last two articles mark a fundamental shift in the approach to constructing representations. In particular, they benefit from the recent surge of research interest in deep learning with neural networks.
These learning models automatically elicit the features that matter for the target task from a large quantity of data. We focus on modelling the semantic relations between documents and queries, as well as between two or more queries. These last articles are among the first applications of neural-network representation learning to information retrieval. The proposed models also improved performance on standard test collections. Our work leads us to the following general conclusion: retrieval performance could be drastically improved by building on representation-learning approaches.
Abstract:
Publisher's advertisements: [2] p. at end.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
This paper explores the connections between scaffolding, second-language learning and bilingual shared reading experiences. A socio-cultural theory of cognition underpins the investigation, which involved implementing a language and culture awareness program (LCAP) in a Year 4 classroom and in the school community. Selected passages from observations are used to analyse the learning of three students, particularly in relation to languages other than English (LOTE). As these three case-study students interacted in the classroom, at home and in the community, they co-constructed, appropriated and applied knowledge from one language to another. Through scaffolding, social spaces were constructed in which students’ learning and development were extended through a variety of activities involving active participation, such as experimenting with language, asking questions and making suggestions. Extending these opportunities for student learning and development is considered in relation to creating teaching and learning environments that celebrate socio-cultural and linguistic diversity.
Abstract:
This study identifies valid orthogonal scales of Gray's animal learning paradigms, upon which his Reinforcement Sensitivity Theory (RST) is based, by determining a revised structure for the Gray-Wilson Personality Questionnaire (GWPQ) (Wilson, Gray, & Barrett, 1990). It also determines how well Gray's RST scales predict the surface scales of personality, measured in terms of the Eysenck Personality Profiler (EPP) scales, the EPQ-R and the learning styles questionnaire (LSQ) scales. First, the results suggest that independent pathways of RST scales may exist in humans. Second, Fight seems related to Anxiety and not to the Fight/Flight system as proposed by RST. Third, a remarkably consistent story emerges in that Extraversion scales are predicted by Fight, Psychoticism scales are predicted by Active-avoidance, Fight and/or Flight, and Neuroticism scales tend not to be predicted at all (except for Anxiety). Fourth, Gray's revised scales are unrelated to gender and age effects and show a predictable overlap with the LSQ and original GWPQ scales. It is concluded that Gray's model of personality might provide a stable biological basis for many surface scales of personality, but that there must also be other influences on personality. These results question the finer structure of Gray's RST while also showing that RST has a greater range of applicability than a strict interpretation of the theory implies.
Abstract:
Foreign exchange trading has recently emerged as a significant activity in many countries. As with most forms of trading, the activity is influenced by many random factors, so a system that effectively emulates the trading process would be very helpful. A major issue for traders in the deregulated Foreign Exchange Market is when to sell and when to buy a particular currency in order to maximize profit. This paper presents novel trading strategies based on the machine learning methods of genetic algorithms and reinforcement learning.
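The abstract does not give the paper's actual strategies, so the following is only an illustrative sketch of the reinforcement-learning side: tabular Q-learning over a made-up exchange-rate series, where the state records whether the currency is currently held and the actions are buy, sell and hold. All prices, hyper-parameters and the reward scheme are assumptions for illustration:

```python
import random

random.seed(0)                                   # reproducible toy run
prices = [1.00, 1.02, 0.99, 1.05, 1.01, 1.08]    # toy exchange-rate series
actions = ["buy", "sell", "hold"]
# State encodes whether the currency is currently held (0 = no, 1 = yes).
Q = {(s, a): 0.0 for s in (0, 1) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2                # learning rate, discount, exploration

for episode in range(200):
    state, entry = 0, 0.0
    for t in range(len(prices) - 1):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(state, x)])
        reward, nxt = 0.0, state
        if a == "buy" and state == 0:
            nxt, entry = 1, prices[t]            # open a position
        elif a == "sell" and state == 1:
            reward, nxt = prices[t] - entry, 0   # realised profit or loss
        # One-step Q-learning update
        best_next = max(Q[(nxt, x)] for x in actions)
        Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
        state = nxt
```

After training, the greedy policy (pick the action with the highest Q-value in the current state) encodes when to buy and when to sell on this toy series; a genetic algorithm would instead evolve a population of candidate trading rules against the same profit signal.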
Theory-of-mind development in oral deaf children with cochlear implants or conventional hearing aids
Abstract:
Background: In the context of the established finding that theory-of-mind (ToM) growth is seriously delayed in late-signing deaf children, and some evidence of equivalent delays in those learning speech with conventional hearing aids, this study's novel contribution was to explore ToM development in deaf children with cochlear implants. Implants can substantially boost auditory acuity and rates of language growth. Despite the implant, there are often problems socialising with hearing peers and some language difficulties, lending special theoretical interest to the present comparative design. Methods: A total of 52 children aged 4 to 12 years took a battery of false belief tests of ToM. There were 26 oral deaf children, half with implants and half with hearing aids, evenly divided between oral-only versus sign-plus-oral schools. Comparison groups of age-matched high-functioning children with autism and younger hearing children were also included. Results: No significant ToM differences emerged between deaf children with implants and those with hearing aids, nor between those in oral-only versus sign-plus-oral schools. Nor did the deaf children perform any better on the ToM tasks than their age peers with autism. Hearing preschoolers scored significantly higher than all other groups. For the deaf and the autistic children, as well as the preschoolers, rate of language development and verbal maturity significantly predicted variability in ToM, over and above chronological age. Conclusions: The finding that deaf children with cochlear implants are as delayed in ToM development as children with autism and their deaf peers with hearing aids or late sign language highlights the likely significance of peer interaction and early fluent communication with peers and family, whether in sign or in speech, in order to optimally facilitate the growth of social cognition and language.