184 results for supervised neighbor embedding
Abstract:
Internship Report presented to the Escola Superior de Educação de Lisboa to obtain the Master's degree in Teaching in the 1st and 2nd Cycles of Basic Education, 2014
Abstract:
Final Master's Project to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Internship Report to obtain the degree of Master in Civil Engineering, Specialization in Buildings
Abstract:
Final Internship Report presented to the Escola Superior de Dança to obtain the degree of Master in Dance Teaching.
Abstract:
In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques for silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy on both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion. Copyright © 2014 ISCA.
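As a rough illustration of the four filter criteria named in this abstract, the sketch below scores each feature column of a matrix X (with labels y for the supervised criteria) and keeps the top k. It is a minimal stand-in, not the paper's implementation; the helper names are invented here, and the Fisher's-ratio helper assumes a binary problem with labels 0/1.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_ratio(X, y):
    # Supervised: squared between-class mean gap over within-class variance
    # (assumes two classes labeled 0 and 1).
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    den = c0.var(axis=0) + c1.var(axis=0) + 1e-12
    return num / den

def mean_median(X):
    # Unsupervised: |mean - median| per feature.
    return np.abs(X.mean(axis=0) - np.median(X, axis=0))

def am_gm(X):
    # Unsupervised: arithmetic-to-geometric mean ratio (needs positive values,
    # so features are shifted into the positive range first).
    Xp = X - X.min(axis=0) + 1e-12
    return Xp.mean(axis=0) / np.exp(np.log(Xp).mean(axis=0))

def select_top_k(X, scores, k):
    # Keep the k highest-scoring feature columns.
    idx = np.argsort(scores)[::-1][:k]
    return X[:, idx], idx

# e.g. rank by mutual information and keep the 50 top features:
# X_sel, kept = select_top_k(X, mutual_info_classif(X, y), k=50)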
Abstract:
Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features. © 2013 Elsevier B.V. All rights reserved.
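A minimal sketch of the filter variant of such an incremental discretizer, for a single feature x: grow the number of quantization levels while a relevance criterion keeps improving. The equal-width quantizer and the mutual-information criterion are illustrative stand-ins (the abstract says any static discretizer and any supervised or unsupervised criterion can be plugged in), and the thresholds are arbitrary.

import numpy as np
from sklearn.metrics import mutual_info_score

def uniform_quantize(x, n_levels):
    # Static unsupervised discretizer: equal-width bins over the feature's range.
    edges = np.linspace(x.min(), x.max(), n_levels + 1)
    return np.digitize(x, edges[1:-1])

def incremental_fd(x, y, max_levels=32, tol=1e-3):
    # Double the number of quantization levels while the relevance criterion
    # (here: empirical mutual information with the labels) keeps improving.
    best_q, best_score, n = None, -np.inf, 2
    while n <= max_levels:
        q = uniform_quantize(x, n)
        s = mutual_info_score(q, y)
        if s - best_score < tol:   # no useful gain from extra levels: stop
            break
        best_q, best_score, n = q, s, n * 2
    return best_q, best_score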
Abstract:
We investigate the structural and thermodynamic properties of a model of particles with 2 patches of type A and 10 patches of type B. Particles are placed on the sites of a face-centered cubic lattice with the patches oriented along the nearest-neighbor directions. The competition between the self-assembly of chains, rings, and networks on the phase diagram is investigated by carrying out a systematic investigation of this class of models, using an extension of Wertheim's theory for associating fluids and Monte Carlo numerical simulations. We varied the ratio r = ε_AB/ε_AA of the interaction between patches A and B, ε_AB, and between A patches, ε_AA (ε_BB is set to zero), as well as the relative position of the A patches, i.e., the angle θ between the (lattice) directions of the A patches. We found that both r and θ (60°, 90°, or 120°) have a profound effect on the phase diagram. In the empty-fluid regime (r < 1/2) the phase diagram is reentrant, with a closed miscibility loop. The region around the lower critical point exhibits unusual structural and thermodynamic behavior, determined by the presence of relatively short rings. The agreement between the results of theory and simulation is excellent for θ = 120° but deteriorates as θ decreases, revealing the need for new theoretical approaches to describe the structure and thermodynamics of systems dominated by small rings. © 2014 AIP Publishing LLC.
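For orientation, a bare-bones sketch of two ingredients such a lattice Monte Carlo simulation needs: the patch-patch bond energy of the model (with ε_BB = 0 and r = ε_AB/ε_AA as above) and the standard Metropolis acceptance rule. This is a schematic under the stated assumptions, not the simulation code used in the paper; the example values of r and T are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def bond_energy(patch_i, patch_j, eps_AA=1.0, r=0.4):
    # Energy of a bond between two facing patches on neighboring sites:
    # -eps_AA for A-A, -r*eps_AA for A-B, 0 for B-B (eps_BB = 0).
    if patch_i == "A" and patch_j == "A":
        return -eps_AA
    if "A" in (patch_i, patch_j):
        return -r * eps_AA
    return 0.0

def metropolis_accept(delta_e, T):
    # Accept a trial move (insertion, deletion, rotation) with
    # probability min(1, exp(-delta_e / T)), taking k_B = 1.
    return delta_e <= 0.0 or rng.random() < np.exp(-delta_e / T)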
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) to the learning task. There is thus a clear need for adequate techniques for feature representation, reduction, and selection, to improve classification accuracy and reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
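A compact way to see how such an FD + FS combination plugs together is a scikit-learn pipeline. The quantile discretizer and the mean/median dispersion score below are generic stand-ins for the paper's unsupervised criteria, not its actual methods, and n_bins, k, and the final classifier are arbitrary illustrative choices.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.feature_selection import SelectKBest
from sklearn.svm import LinearSVC

def dispersion_score(X, y=None):
    # Unsupervised relevance stand-in: per-feature gap between mean and median.
    X = np.asarray(X)
    return np.abs(X.mean(axis=0) - np.median(X, axis=0))

pipe = Pipeline([
    ("fd", KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile")),
    ("fs", SelectKBest(dispersion_score, k=100)),
    ("clf", LinearSVC()),
])
# pipe.fit(X_train, y_train); pipe.score(X_test, y_test)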
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
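A minimal sketch of a low-complexity relevance/redundancy ranking filter in the spirit of the abstract: sort features by a relevance score, then greedily drop any candidate too correlated with the feature kept just before it (comparing only against the last kept feature keeps the cost near-linear in the number of features). The function name, threshold, and the variance-based relevance in the usage note are illustrative assumptions, not the paper's definitions.

import numpy as np

def rr_filter(X, relevance, max_sim=0.8, k=None):
    # Greedy relevance-redundancy selection.
    # relevance: 1-D array with one score per column of X.
    # max_sim: absolute-correlation threshold above which a feature is redundant.
    order = np.argsort(relevance)[::-1]   # most relevant first
    kept = [order[0]]
    for j in order[1:]:
        sim = abs(np.corrcoef(X[:, kept[-1]], X[:, j])[0, 1])
        if sim < max_sim:                 # not redundant w.r.t. last kept feature
            kept.append(j)
        if k is not None and len(kept) == k:
            break
    return np.array(kept)

# Example with an unsupervised relevance score (per-feature variance):
# idx = rr_filter(X, X.var(axis=0), max_sim=0.8, k=500)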
Abstract:
Report on the Supervised Professional Practice, Master's in Preschool Education
Abstract:
Report on the Supervised Professional Practice, Master's in Preschool Education
Abstract:
Report on the Supervised Professional Practice, Master's in Preschool Education
Abstract:
Report on the Supervised Professional Practice, Master's in Preschool Education
Abstract:
Report on the Supervised Professional Practice, Master's in Preschool Education
Abstract:
Report on the Supervised Professional Practice, Master's in Preschool Education