873 results for Learning method
Abstract:
Semi-supervised learning is one of the important topics in machine learning, concerned with pattern classification where only a small subset of the data is labeled. In this paper, a new network-based (or graph-based) semi-supervised classification model is proposed. It employs a combined random-greedy walk of particles, with competition and cooperation mechanisms, to propagate class labels to the whole network. Due to the competition mechanism, the proposed model spreads labels in a local fashion: each particle only visits the portion of nodes potentially belonging to it, and is not allowed to visit nodes definitely occupied by particles of other classes. In this way, a "divide-and-conquer" effect is naturally embedded in the model. As a result, the proposed model achieves a good classification rate while exhibiting a low computational complexity order in comparison with other network-based semi-supervised algorithms. Computer simulations carried out on synthetic and real-world data sets provide a numeric quantification of the method's performance.
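As an illustration only, here is a minimal Python sketch of the particle-competition idea described above; the toy graph, the domination-update amounts, and all parameter names are our own simplifications, not the authors' algorithm:

```python
import random

def particle_competition(adj, seeds, steps=2000, p_greedy=0.6, seed=0):
    """Toy particle competition for semi-supervised labeling.

    adj:   {node: [neighbors]} undirected graph
    seeds: {node: class} the few pre-labeled nodes
    Each seed releases one particle; a particle raises its own class's
    domination level on visited nodes and lowers the rivals' levels.
    """
    rng = random.Random(seed)
    classes = sorted(set(seeds.values()))
    k = len(classes)
    # domination level of each class on each node, uniform except at seeds
    dom = {v: {c: 1.0 / k for c in classes} for v in adj}
    for v, c in seeds.items():
        dom[v] = {d: (1.0 if d == c else 0.0) for d in classes}
    pos = {i: v for i, v in enumerate(seeds)}          # particle positions
    lab = {i: seeds[v] for i, v in enumerate(seeds)}   # particle classes
    for _ in range(steps):
        for i in pos:
            nbrs = adj[pos[i]]
            if rng.random() < p_greedy:
                # greedy move: prefer nodes already dominated by own class
                nxt = rng.choices(nbrs,
                                  weights=[dom[u][lab[i]] + 1e-9 for u in nbrs])[0]
            else:
                nxt = rng.choice(nbrs)  # random (exploratory) move
            if dom[nxt][lab[i]] == 0.0:
                continue  # node definitely occupied by a rival class: rejected
            if nxt not in seeds:        # seed labels stay fixed
                for c in classes:       # raise own level, lower the rivals'
                    delta = 0.1 if c == lab[i] else -0.1 / (k - 1)
                    dom[nxt][c] = min(1.0, max(0.0, dom[nxt][c] + delta))
            pos[i] = nxt
    return {v: max(dom[v], key=dom[v].get) for v in adj}

# two tight 4-cliques joined by a single bridge edge (3-4)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = particle_competition(adj, seeds={0: "A", 7: "B"})
```

On this toy graph each seeded particle tends to dominate its own clique, illustrating the local, divide-and-conquer style of label spreading.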
Abstract:
Recently, research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution (DE) algorithm through population initialization and different search strategies. Taking a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. Results have shown that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, if SS is employed first. These results are also in consonance with the literature on the importance of an adequate starting population. Moreover, SS finds initial populations of superior quality more effectively than the other three algorithms, which employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
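To make the idea concrete, here is a heavily simplified sketch of the promising-region search; the published SS uses machine-learning models to delimit regions, whereas this toy version merely shrinks a bounding box around the best sampled points (function names and parameters are our assumptions):

```python
import random

def smart_sampling(f, bounds, n=200, keep=0.2, iters=3, seed=1):
    """Toy promising-region search: repeatedly sample the current
    region, keep the best fraction of points, and shrink the region
    to their bounding box."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    dims = len(bounds)
    for _ in range(iters):
        pts = [[rng.uniform(lo[d], hi[d]) for d in range(dims)]
               for _ in range(n)]
        pts.sort(key=f)                       # best (lowest) first
        best = pts[:max(2, int(keep * n))]
        lo = [min(p[d] for p in best) for d in range(dims)]
        hi = [max(p[d] for p in best) for d in range(dims)]
    return lo, hi  # promising region: a metaheuristic (e.g. DE) starts here

sphere = lambda x: sum(v * v for v in x)      # global optimum at the origin
lo, hi = smart_sampling(sphere, [(-10, 10), (-10, 10)])
```

On the sphere function the box contracts around the origin after a few iterations, so a DE population initialized inside it starts much closer to the global optimum than a uniform one.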
Abstract:
The results of a pedagogical strategy implemented at the University of Sao Paulo at Sao Carlos are presented and discussed. The initiative was conducted in a transportation course offered to Civil Engineering students. The approach combines problem-based and project-based learning (PBL) with blended learning (B-learning). Starting in 2006, a different problem was introduced every year. From 2009 on, however, the problem-based learning concept was expanded to project-based learning. The performance of the students was analyzed using the following elements: (1) grades in course activities; (2) answers from a questionnaire designed for course evaluation; and (3) cognitive maps made to assess the effects of PBL through the comparison of the responses provided by the students involved and those not involved in the experiment. The results showed positive aspects of the method, such as a strong involvement of several students with the subject. A gradual increase in the average scores obtained by the students in the project activities (from 6.77 in 2006 to 8.24 in 2009) was concomitant with a better evaluation of these activities and of the course as a whole (90 and 97% of options "Good" or "Very good" in 2009, respectively). A growing interest in the field of transportation engineering as an alternative for further studies was also noticed. DOI: 10.1061/(ASCE)EI.1943-5541.0000115. (C) 2012 American Society of Civil Engineers.
Abstract:
Aims: to compare the performance of undergraduate students in dressing a semi-implanted central venous catheter on a simulator, with the assistance of a tutor or of a self-learning tutorial. Method: randomized controlled trial. The sample consisted of 35 undergraduate nursing students, who were divided into two groups after attending an open dialogue presentation class and watching a video. One group practiced the procedure with a tutor and the other with the assistance of a self-learning tutorial. Results: in relation to cognitive knowledge, both groups performed worse in the pre-test than in the post-test. The group that received assistance from a tutor performed better in the practical assessment. Conclusion: simulation undertaken with the assistance of a tutor proved to be the more effective learning strategy when compared to simulation using a self-learning tutorial. Advances in nursing simulation technology are of utmost importance, and the role of the tutor in the learning process should be highlighted, taking into consideration the part this professional plays in knowledge acquisition and in the development of critical-reflexive thoughts and attitudes. (ClinicalTrials.gov Identifier: NCT 01614314).
Abstract:
Competitive learning is an important machine learning approach widely employed in artificial neural networks. In this paper, we present a rigorous definition of a new type of competitive learning scheme realized on large-scale networks. The model consists of several particles walking within the network and competing with each other to occupy as many nodes as possible, while attempting to reject intruder particles. The particle's walking rule is a stochastic combination of random and preferential movements. The model has been applied to solve community detection and data clustering problems. Computer simulations reveal that the proposed technique achieves high precision in community and cluster detection, as well as low computational complexity. Moreover, we have developed an efficient method for estimating the most likely number of clusters by using an evaluator index that monitors the information generated by the competition process itself. We hope this paper will provide an alternative perspective for the study of competitive learning.
Abstract:
This paper presents an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm, version II) that incorporates a parameter-free self-tuning approach based on a reinforcement learning technique, called the Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared in particular with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, not only are the optimization results compared with those obtained by other multiobjective optimization methods, but the proposed method also avoids time-consuming and complex parameter tuning.
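As a hedged sketch of what reinforcement-learning-based parameter self-tuning can look like: an epsilon-greedy agent picks a mutation rate each generation and is rewarded when the population improves. The action set, reward signal, and update rule below are illustrative assumptions, not the NSGA-RL specification:

```python
import random

class ParamTuner:
    """Epsilon-greedy bandit that learns which parameter value
    (here, a mutation rate) yields the best improvement."""
    def __init__(self, actions=(0.01, 0.05, 0.2), eps=0.1, alpha=0.3, seed=0):
        self.rng = random.Random(seed)
        self.actions, self.eps, self.alpha = actions, eps, alpha
        self.q = {a: 0.0 for a in actions}  # estimated value of each action
    def choose(self):
        if self.rng.random() < self.eps:
            return self.rng.choice(self.actions)   # explore
        return max(self.q, key=self.q.get)         # exploit
    def update(self, action, reward):
        # incremental Q-value update toward the observed reward
        self.q[action] += self.alpha * (reward - self.q[action])

tuner = ParamTuner()
# stand-in for generations of a GA where rate 0.05 yields the biggest gain
for _ in range(500):
    a = tuner.choose()
    reward = {0.01: 0.2, 0.05: 1.0, 0.2: 0.4}[a]
    tuner.update(a, reward)
best = max(tuner.q, key=tuner.q.get)
```

In a real multiobjective setting the reward would come from an indicator such as hypervolume improvement rather than the fixed values used here.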
Abstract:
This thesis is a collection of five independent but closely related studies. The overall purpose is to approach the analysis of learning outcomes from a perspective that combines three major elements, namely lifelong-lifewide learning, human capital, and the benefits of learning. The approach is based on an interdisciplinary perspective of the human capital paradigm. It considers the multiple learning contexts that are responsible for the development of embodied potential (including formal, non-formal and informal learning) and the multiple outcomes (including knowledge, skills, and economic, social and other benefits) that result from learning. The studies also seek to examine the extent and relative influence of learning in different contexts on the formation of embodied potential, and how that in turn affects economic and social well-being. The first study combines the three major elements, lifelong-lifewide learning, human capital, and the benefits of learning, into one common conceptual framework. This study forms a common basis for the four empirical studies that follow. All four empirical studies use data from the International Adult Literacy Survey (IALS) to investigate the relationships among the major elements of the conceptual framework presented in the first study. Study I. A conceptual framework for the analysis of learning outcomes. This study brings together some key concepts and theories that are relevant for the analysis of learning outcomes. Many of the concepts and theories have emerged from varied disciplines including economics, educational psychology, cognitive science and sociology, to name only a few. Accordingly, some of the research questions inherent in the framework relate to different disciplinary perspectives. The primary purpose is to create a common basis for formulating and testing hypotheses, as well as for interpreting the findings of the empirical studies that follow.
In particular, the framework facilitates the process of theorizing and hypothesizing on the relationships and processes concerning lifelong learning, as well as their antecedents and consequences. Study II. Determinants of literacy proficiency: a lifelong-lifewide learning perspective. This study investigates lifelong and lifewide processes of skill formation. In particular, it seeks to estimate the substitutability and complementarity effects of learning in multiple settings over the lifespan on literacy skill formation. This is done by investigating the predictive capacity of major determinants of literacy proficiency that are associated with a variety of learning contexts, including school, home, work, community and leisure. An identical structural model based on previous research is fitted to the IALS data for 18 countries. The results show that even after accounting for all factors, education remains the most important predictor of literacy proficiency. In all countries, however, the total effect of education is significantly mediated through further learning occurring at work, at home and in the community. Therefore, job-related and other literacy-related factors complement education in predicting literacy proficiency. This result points to a virtuous cycle of lifelong learning, particularly to how educational attainment influences other learning behaviours throughout life. In addition, results show that home background, as measured by parents’ education, is also a strong predictor of literacy proficiency, but in many countries this occurs only if a favourable home background is complemented with some post-secondary education. Study III. The effect of literacy proficiency on earnings: an aggregated occupational approach using the Canadian IALS data. This study uses data from the Canadian Adult Literacy Survey to estimate the earnings return to literacy skills.
The approach adopts a segmented view of the labour market by aggregating occupations into seven types, enabling the estimation of the variable impact of literacy proficiency on earnings, both within and between different types of occupations. This is done using Hierarchical Linear Modeling (HLM). The method used to construct the aggregated occupational classification is based on an analysis that considers the role of cognitive and other skills in relation to the nature of occupational tasks. Substantial premiums are found to be associated with some occupational types even after adjusting for within-occupation differences in individual characteristics such as schooling, literacy proficiency, labour force experience and gender. Average years of schooling and average levels of literacy proficiency at the between level account for over two-thirds of the premiums. Within occupations, there are significant returns to schooling, but they vary depending on the type of occupation. In contrast, the within-occupation return to literacy proficiency is not necessarily significant; it depends on the type of occupation. Study IV: Determinants of economic and social outcomes from a lifewide learning perspective in Canada. In this study the relationships between learning in different contexts, which span the lifewide learning dimension, and individual earnings on the one hand and community participation on the other are examined in separate but comparable models. Data from the Canadian Adult Literacy Survey are used to estimate structural models which correspond closely to the common conceptual framework outlined in Study I. The findings suggest that the relationship between formal education and economic and social outcomes is complex, with confounding effects. The results indicate that learning occurring in different contexts and for different reasons leads to different kinds of benefits.
The latter finding suggests a potential trade-off between realizing economic and social benefits through learning undertaken for job-related versus personal-interest-related reasons. Study V: The effects of learning on economic and social well-being: a comparative analysis. Using the same structural model as in Study IV, hypotheses are comparatively examined using the International Adult Literacy Survey data for Canada, Denmark, the Netherlands, Norway, the United Kingdom, and the United States. The main finding from Study IV is confirmed for an additional five countries, namely that the effect of initial schooling on well-being is more complex than a direct one, being significantly mediated by subsequent learning. Additionally, findings suggest that people who devote more time to learning for job-related reasons than to learning for personal-interest-related reasons experience higher levels of economic well-being. Moreover, devoting too much time to learning for personal-interest-related reasons has a negative effect on earnings, except in Denmark. But the more time people devote to learning for personal-interest-related reasons, the more it tends to contribute to higher levels of social well-being. These results again suggest a trade-off in learning for different reasons and in different contexts.
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, the physical attributes that may influence a grasp. Humans can solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already-known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object to grasp but also the effect they want to obtain. This is why, in our case, the study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, allowing the model to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning.
Since the network as currently implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn the structure: as more tasks and object features are introduced in the future, a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
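As a toy illustration of score-based structure learning, the sketch below uses a BIC comparison to decide whether data support an edge between two discrete variables; the scoring rule is standard, but the two-variable setting and all names are our simplification of what a full structure learner does:

```python
import math
from collections import Counter

def loglik_node(values, parent_values=None):
    """Maximum-likelihood log-likelihood of one discrete node,
    optionally conditioned on a single discrete parent."""
    if parent_values is None:
        counts = Counter(values)
        n = len(values)
        return sum(c * math.log(c / n) for c in counts.values())
    groups = {}
    for v, p in zip(values, parent_values):
        groups.setdefault(p, []).append(v)
    return sum(loglik_node(vals) for vals in groups.values())

def bic_edge_vs_none(a, b):
    """True if BIC prefers the structure A -> B over no edge.
    A's marginal term is common to both structures and cancels out."""
    n = len(a)
    states_a, states_b = len(set(a)), len(set(b))
    ll_edge = loglik_node(b, a)
    k_edge = states_a * (states_b - 1)      # one CPT row per parent state
    ll_none = loglik_node(b)
    k_none = states_b - 1
    bic_e = ll_edge - 0.5 * k_edge * math.log(n)
    bic_n = ll_none - 0.5 * k_none * math.log(n)
    return bic_e > bic_n

a = [0] * 20 + [1] * 20
b = a[:]                 # B copies A: the edge should be favoured
c = [0, 1] * 20          # C independent of A: the edge should be rejected
```

Hill-climbing structure learners apply this kind of local score comparison repeatedly, adding, removing, or reversing one edge at a time until no change improves the score.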
Abstract:
The goal of this thesis work is to develop a computational method based on machine learning techniques for predicting the disulfide-bonding states of cysteine residues in proteins, a sub-problem of the bigger and yet unsolved problem of protein structure prediction. Improved prediction of the disulfide-bonding states of cysteine residues will help constrain the three-dimensional (3D) structure of the respective protein, and thus will eventually help in the prediction of the 3D structure of proteins. Results of this work have direct implications for site-directed mutational studies of proteins, protein engineering and the problem of protein folding. We have used a combination of an Artificial Neural Network (ANN) and a Hidden Markov Model (HMM), the so-called Hidden Neural Network (HNN), as the machine learning technique for our prediction method. By using different global and local features of proteins (specifically profiles, parity of cysteine residues, average cysteine conservation, correlated mutation, sub-cellular localization, and signal peptide) as inputs, and considering Eukaryotes and Prokaryotes separately, we have reached a remarkable accuracy of 94% on a per-cysteine basis for both the Eukaryotic and Prokaryotic datasets, and accuracies of 90% and 93% on a per-protein basis for the Eukaryotic and Prokaryotic datasets respectively. These are the best accuracies reached so far by any existing prediction method; our method has thus outperformed all previously developed approaches and is therefore more reliable. The most interesting part of this work is the difference in prediction performance between Eukaryotes and Prokaryotes at the basic level of input coding, when ‘profile’ information was given as input to our prediction method.
One of the reasons we discovered for this is the difference in the amino acid composition of the local environment of bonded and free cysteine residues in Eukaryotes and Prokaryotes: Eukaryotic bonded cysteine examples have a ‘symmetric-cysteine-rich’ environment, whereas Prokaryotic bonded examples lack it.
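A minimal sketch of per-cysteine feature extraction in this spirit, using only a sequence window and the parity of the cysteine count (an even count is compatible with all cysteines being bonded); the real method feeds evolutionary profiles to a hidden neural network, and all names below are ours:

```python
def cysteine_features(seq, window=2):
    """For each cysteine in a protein sequence, build a local sequence
    window (padded with '-') plus a global even-parity flag."""
    n_cys = seq.count("C")
    parity_even = n_cys % 2 == 0
    feats = []
    for i, aa in enumerate(seq):
        if aa == "C":
            left = seq[max(0, i - window):i].rjust(window, "-")
            right = seq[i + 1:i + 1 + window].ljust(window, "-")
            feats.append({"pos": i,
                          "window": left + "C" + right,
                          "even_parity": parity_even})
    return feats

feats = cysteine_features("MKCACGHC")
```

In a full predictor each window position would be expanded into a profile column (amino-acid frequencies from a multiple alignment) before being handed to the classifier.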
Abstract:
This thesis concerns the development of a function approximator and its use in methods for learning discrete and continuous actions: 1. A general function approximator, Locally Weighted Interpolating Growing Neural Gas (LWIGNG), is developed on the basis of a Growing Neural Gas (GNG). The topological neighbourhood within the neuron structure is used to interpolate between neighbouring neurons and to compute the approximation through local weighting. The capability of the approach, in particular with respect to changing target functions and changing input distributions, is demonstrated in several experiments. 2. For learning discrete actions, LWIGNG is combined with Q-learning to form the Q-LWIGNG method. For this, the underlying GNG algorithm has to be modified, because the input data in action learning arrive in a particular order. Q-LWIGNG achieves very good results on the pole-balancing and mountain-car problems, and good results on the acrobot problem. 3. For learning continuous actions, a REINFORCE algorithm is combined with LWIGNG to form the ReinforceGNG method. An actor-critic architecture is employed in order to learn from delayed rewards. LWIGNG approximates both the state-value function and the policy, which is represented by the situation-dependent parameters of a normal distribution. ReinforceGNG is successfully applied to learn movements for a simulated two-wheeled robot that has to intercept a rolling ball under certain conditions.
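A toy sketch of the locally weighted interpolation idea behind LWIGNG: the prediction is an inverse-distance-weighted average of values stored at nearby neurons. The real method interpolates only over a neuron's topological GNG neighbourhood; this simplification uses all neurons of a tiny 1-D network:

```python
def lwign_predict(neurons, x, eps=1e-9):
    """Inverse-distance-weighted interpolation over (position, value)
    pairs; eps avoids division by zero at a neuron's exact position."""
    weights = [(1.0 / (abs(x - pos) + eps), val) for pos, val in neurons]
    total = sum(w for w, _ in weights)
    return sum(w * v for w, v in weights) / total

# three 1-D neurons storing samples of f(x) = 2x
neurons = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
y = lwign_predict(neurons, 0.5)
```

In Q-LWIGNG the interpolated value would be a Q-value and the neuron values would be updated by the usual temporal-difference rule.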
Abstract:
Different types of proteins exist, with diverse functions that are essential for living organisms. An important class of proteins is represented by transmembrane proteins, which are specifically designed to be inserted into biological membranes and to perform very important functions in the cell, such as cell communication and active transport across the membrane. Transmembrane β-barrels (TMBBs) are a sub-class of membrane proteins largely under-represented in structure databases because of the extreme difficulty of experimental structure determination. For this reason, computational tools able to predict the structure of TMBBs are needed. In this thesis, two computational problems related to TMBBs were addressed: the detection of TMBBs in large datasets of proteins and the prediction of the topology of TMBB proteins. Firstly, a method for TMBB detection is presented, based on a novel neural network framework for variable-length sequence classification. The proposed approach was validated on a non-redundant dataset of proteins. Furthermore, we carried out genome-wide detection using the entire Escherichia coli proteome. In both experiments, the method significantly outperformed other existing state-of-the-art approaches, reaching very high PPV (92%) and MCC (0.82). Secondly, a method is also introduced for TMBB topology prediction. The proposed approach is based on grammatical modelling and probabilistic discriminative models for sequence data labeling. The method was evaluated using a newly generated dataset of 38 TMBB proteins obtained from high-resolution data in the PDB. Results show that the model is able to correctly predict the topologies of 25 out of 38 protein chains in the dataset. When tested on previously released datasets, the performance of the proposed approach was comparable or superior to the current state of the art in TMBB topology prediction.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed which leverage a set of labeled documents from one domain to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification.
Results show that classification accuracy still requires improvement, but models generated in one domain prove effectively reusable in a different one.
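The nearest-centroid adaptation described above can be sketched in a few lines; documents here are bag-of-words dictionaries and the similarity is a plain dot product (both our simplifications of the actual profiles and similarity measure):

```python
def dot(u, v):
    """Dot product of two sparse bag-of-words vectors."""
    return sum(u[w] * v.get(w, 0) for w in u)

def centroid(docs):
    """Average bag-of-words vector of a list of documents."""
    c = {}
    for d in docs:
        for w, x in d.items():
            c[w] = c.get(w, 0.0) + x / len(docs)
    return c

def nearest_centroid_adapt(source_labeled, target_docs, iters=5):
    """Build category centroids from the labeled source domain, then
    iteratively re-estimate each centroid from the target documents
    it attracts, adapting the profiles to the new domain."""
    cents = {lab: centroid(ds) for lab, ds in source_labeled.items()}
    assign = {}
    for _ in range(iters):
        assign = {i: max(cents, key=lambda l: dot(d, cents[l]))
                  for i, d in enumerate(target_docs)}
        for lab in cents:
            mine = [target_docs[i] for i, l in assign.items() if l == lab]
            if mine:
                cents[lab] = centroid(mine)  # adapt centroid to target domain
    return assign

source = {"sport": [{"ball": 2, "goal": 1}, {"ball": 1, "match": 1}],
          "tech":  [{"code": 2, "cpu": 1}, {"code": 1, "chip": 1}]}
target = [{"ball": 1, "stadium": 2}, {"code": 1, "gpu": 2}]
assign = nearest_centroid_adapt(source, target)
```

After the first assignment the centroids absorb target-domain vocabulary ("stadium", "gpu"), which is precisely the adaptation step that lets a source-trained classifier survive the domain shift.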
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are only a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with tons of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the broad field of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a lower quantity of data, HTM can outperform CNN.
Abstract:
A successful actor often requires a specific acting method or style to enhance their performance. Through theatrical research, rehearsal and performance, an actor can narrow down their seemingly endless search for the most productive methodology. By researching, studying, and applying the methods of Constantin Stanislavski, Stella Adler, and Tadashi Suzuki to my rehearsal process, I have found my most effective acting style: Stella Adler's method. I utilize this acting method during the performance period of my early professional acting career. Experimental research for this thesis was completed in the studio. I applied each of the three aforementioned methods to a dramatic/classical monologue. The results I gathered helped me to decide upon Adler's methodology to carry with me through my upcoming professional auditions and career. In roles cast from those auditions, I will employ the methodology in my professional work as an actress. Each acting teacher has provided the performance world with a new way to experience the stage. The methods are unique and enable the actor to find the most dynamic performance through engaging technical skill.
Abstract:
In grapheme-color synesthesia, the letter "c" printed in black may be experienced as red, but typically the color red does not trigger the experience of the letter "c." Therefore, at the level of subjective experience, cross-activation is usually unidirectional. However, recent evidence from digit-color synesthesia suggests that at an implicit level bidirectional cross-activation can occur. Here we demonstrate that this finding is not restricted to that specific type of synesthesia, and we introduce a new method that enables the investigation of bidirectionality in other types of synesthesia. We found that a group of grapheme-color synesthetes, but not a control group, showed a startle response to a color-inducing grapheme after a startle response had been conditioned to the specific corresponding color. These results imply that when the startle response was associated with the real color, an association between shock and the grapheme was also established. By this mechanism (i.e. implicit cross-activation) the conditioned response to the real color generalized to the synesthetic color. We suggest that parietal brain areas are responsible for this neural backfiring.