571 results for classifiers
Abstract:
Undoubtedly, the human face provides much more information than we think. Without our consent, the face conveys nonverbal cues arising from facial interactions that reveal our emotional state, cognitive activity, personality and diseases. Recent studies [OFT14, TODMS15] show that many of our social and interpersonal decisions derive from a prior analysis of the face that allows us to establish whether a person is trustworthy, hardworking, intelligent, etc. This error-prone interpretation derives from the innate ability of human beings to find and interpret these signals. This capability is the subject of study, with special interest in developing methods able to estimate automatically these signals or attributes associated with the face. Thus, interest in the estimation of facial attributes has grown rapidly in recent years because of the variety of applications in which these methods can be used: targeted marketing, security systems, human-computer interaction, etc. However, such methods are still far from being perfect and robust across problem domains. The main difficulty is caused by the high intra-class variability due to changes in imaging conditions: lighting changes, occlusions, facial expressions, age, gender, ethnicity, etc., frequently found in images acquired in uncontrolled environments. This research work studies image-analysis techniques for estimating facial attributes such as gender, age and pose, using linear methods and exploiting the statistical dependencies between these attributes. In addition, our proposal focuses on the construction of estimators that offer a good balance between performance and computational cost. Regarding this last point, we study a set of strategies for gender classification and compare them with a proposal based on a Bayesian classifier and a suitable feature extraction based on Linear Discriminant Analysis. We analyse in depth why linear techniques have failed to provide competitive results to date and show how to obtain performance similar to that of the best non-linear techniques. A second algorithm is proposed for age estimation, based on a K-NN regressor and a proper feature selection like the one proposed for gender classification. From our experiments we observe that classifier performance drops significantly when classifiers are trained and tested on different databases. We have found that one of the causes is the existence of dependencies between facial attributes that have not been considered in the construction of the classifiers. Our results demonstrate that intra-class variability can be reduced when the statistical dependencies between the facial attributes gender, age and pose are taken into account, thus improving the performance of our facial-attribute classifiers at a small computational cost.
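As a rough illustration of the linear pipeline this abstract describes (LDA-based feature extraction feeding a Bayesian classifier), here is a minimal sketch using scikit-learn; the random arrays stand in for flattened face crops and gender labels, and the PCA step and all dimensions are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal sketch: LDA feature extraction + Gaussian Bayesian classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32 * 32))   # placeholder for flattened face crops
y = rng.integers(0, 2, size=400)      # placeholder gender labels

# PCA first to avoid singular scatter matrices, then a 1-D LDA projection,
# then a Gaussian Bayesian classifier on the discriminant feature.
clf = make_pipeline(PCA(n_components=50),
                    LinearDiscriminantAnalysis(n_components=1),
                    GaussianNB())
print(cross_val_score(clf, X, y, cv=5).mean())
```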
Abstract:
A more natural, intuitive, user-friendly, and less intrusive human-computer interface for controlling an application by executing hand gestures is presented. For this purpose, a robust vision-based hand-gesture recognition system has been developed, and a new database has been created to test it. The system is divided into three stages: detection, tracking, and recognition. The detection stage searches every frame of a video sequence for potential hand poses using a binary Support Vector Machine classifier with Local Binary Patterns as feature vectors. These detections are fed to a tracker to generate a spatio-temporal trajectory of hand poses. Finally, the recognition stage segments a spatio-temporal volume of data using the obtained trajectories and computes a video descriptor called Volumetric Spatiograms of Local Binary Patterns (VS-LBP), which is delivered to a bank of SVM classifiers to perform gesture recognition. The VS-LBP is a novel video descriptor and one of the most important contributions of the paper: it provides much richer spatio-temporal information than other existing approaches in the state of the art at a manageable computational cost. Excellent results have been obtained, outperforming other state-of-the-art approaches.
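As a hedged sketch of the detection stage described above (a binary SVM over Local Binary Pattern feature vectors), the following uses scikit-image and scikit-learn; the window size, LBP parameters, and random patches are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: LBP histograms + binary SVM for hand-pose detection.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_histogram(patch, P=8, R=1):
    """Uniform LBP histogram used as the feature vector of one window."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32))       # placeholder hand/background windows
labels = rng.integers(0, 2, size=200)     # 1 = hand pose, 0 = background

X = np.array([lbp_histogram(p) for p in patches])
detector = LinearSVC().fit(X, labels)     # binary SVM hand detector

# Sliding-window use: score every window of a frame and keep the positives.
print(detector.decision_function(X[:5]))
```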
Abstract:
Geographic annotation of documents consists of adopting metadata to identify place names and the positions of their occurrences in the text. This information is useful, for example, to search engines. From the toponyms mentioned in a text it is possible to identify the spatial context in which the subject of the text is embedded, which makes it possible to group documents that refer to the same context, assigning a geographic scope to each document. This Master's dissertation presents a new method, named Geofier, for determining the geographic scope of documents. The novelty of Geofier is the ability to identify the geographic scope of a document by means of machine-learning classifiers trained without a gazetteer and without assumptions about the language of the analysed texts. Wikipedia was used as the source of a set of geographically annotated documents for training a hierarchy of Naive Bayes classifiers and Support Vector Machines (SVMs). A performance comparison between Geofier and a reimplementation of the Web-a-Where system was carried out on the task of determining the geographic scope of Wikipedia texts. The Geofier hierarchy was trained and evaluated in two ways: using toponyms from the same gazetteer as Web-a-Where, and using n-grams extracted from the training documents. As a result, Geofier maintained performance superior to that obtained by the Web-a-Where reimplementation.
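A minimal sketch of one node of such a classifier hierarchy, assuming scikit-learn; the toy documents and scope labels are placeholders, not the Wikipedia training data used by Geofier.

```python
# Minimal sketch: gazetteer-free geographic-scope classification via n-grams.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "the carnival parade crossed the sambadrome in rio",
    "the loop and the magnificent mile draw crowds in chicago",
    "ferries crossed the bay near the golden gate",
    "samba schools rehearsed along the beach in rio",
]
scopes = ["BR", "US", "US", "BR"]  # toy geographic-scope labels

# Word n-grams replace the gazetteer: the classifier learns
# location-bearing patterns directly from the training documents.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(docs, scopes)
print(model.predict(["a ferry tour of the bay"]))
```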
Abstract:
This article presents a new classifier-fusion algorithm based on each classifier's confusion matrix, from which per-class precision and recall values are extracted. The only data required to apply this new fusion method are the classes or labels assigned by each of the systems and the reference classes on the development part of the database. The proposed algorithm is described, together with the results obtained when combining the outputs of two systems that participated in the Albayzin 2012 audio segmentation evaluation campaign. The robustness of the algorithm has been verified, obtaining a relative reduction of the segmentation error of 6.28% when fusing the systems with the lowest and highest error rates among those submitted to the evaluation.
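The abstract does not give the exact fusion rule, so the following is only a plausible sketch of the idea: estimate per-class precision from each system's confusion matrix on the development data, then weight each system's vote by its precision for the class it predicts. All data below are toy placeholders.

```python
# Minimal sketch: precision-weighted fusion of classifier outputs.
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_precision(y_true, y_pred, n_classes):
    """Precision of each predicted class, estimated on the development set."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    return np.diag(cm) / np.maximum(cm.sum(axis=0), 1)

def fuse(preds, precisions, n_classes):
    """Each system votes for its predicted class with weight = its precision
    for that class; the fused label is the highest-scoring class."""
    scores = np.zeros(n_classes)
    for pred, prec in zip(preds, precisions):
        scores[pred] += prec[pred]
    return int(np.argmax(scores))

# Toy development labels and two toy systems over a 3-class problem.
y_dev = np.array([0, 1, 2, 0, 1, 2, 0, 1])
sysA  = np.array([0, 1, 2, 0, 2, 2, 0, 1])
sysB  = np.array([0, 1, 1, 0, 1, 2, 1, 1])
precs = [per_class_precision(y_dev, s, 3) for s in (sysA, sysB)]
print(fuse([sysA[4], sysB[4]], precs, 3))  # fused decision for one segment
```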
Abstract:
The field of natural language processing (NLP) has grown considerably in recent years; its research areas include information retrieval and extraction, data mining, machine translation, question-answering systems, automatic summarisation, and sentiment analysis, among others. This article presents concepts and some tools that contribute to the understanding of text processing with NLP techniques, with the aim of extracting relevant information that can be used in a wide range of applications. Automatic classifiers can be developed to categorise documents and recommend tags; such classifiers should be platform-independent, easily customisable so that they can be integrated into different projects, and able to learn from examples. This article introduces these classification algorithms, reviews some open-source tools currently available for these tasks, and compares several implementations using the F measure to evaluate the classifiers.
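As a small illustration of the F-measure evaluation mentioned above, a scikit-learn sketch with placeholder labels and predictions:

```python
# Minimal sketch: comparing classifiers with the F measure.
from sklearn.metrics import f1_score, classification_report

y_true = ["sports", "politics", "sports", "tech", "politics", "tech"]
y_pred = ["sports", "tech",     "sports", "tech", "politics", "sports"]

# Macro-averaged F gives each category equal weight when comparing systems.
print(f1_score(y_true, y_pred, average="macro"))
print(classification_report(y_true, y_pred))
```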
Abstract:
Feature selection is an important and active issue in clustering and classification problems. Choosing an adequate feature subset reduces the dimensionality of a dataset, which decreases the computational complexity of classification and improves classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with only one objective, namely the classification accuracy obtained with the selected feature subset, some multi-objective approaches to this problem have been proposed in recent years. These approaches either select features that improve not only the classification accuracy but also the generalisation capability of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features exhibited by some of the methods used to validate the clustering/classification results of unsupervised classifiers. The main contribution of this paper is a multi-objective approach for feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs), which includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic, but also to distinguish among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The feature sets selected in our experiments provide detection rates of up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values of up to 99.12%.
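A minimal sketch of the two-objective idea (maximise accuracy while minimising the number of features) over a tiny feature pool; the paper's GHSOM clustering procedure is replaced here by a plain k-NN purely for brevity, so this illustrates the multi-objective formulation, not the paper's method.

```python
# Minimal sketch: Pareto-style two-objective feature selection.
from itertools import combinations

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Objective 1: cross-validated accuracy of each subset.
# Objective 2: the subset size itself.
candidates = []
for k in range(1, X.shape[1] + 1):
    for subset in combinations(range(X.shape[1]), k):
        acc = cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=5).mean()
        candidates.append((subset, acc))

# Keep the Pareto front: a subset survives if no other subset is at least as
# small and strictly more accurate.
pareto = [(s, a) for s, a in candidates
          if not any(len(t) <= len(s) and b > a for (t, b) in candidates)]
for subset, acc in sorted(pareto, key=lambda p: len(p[0])):
    print(subset, round(acc, 3))
```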
Abstract:
Automated human behaviour analysis has been, and still remains, a challenging problem. It has been approached from different points of view, from primitive actions to human-interaction recognition. This paper focuses on trajectory analysis, which allows a simple, high-level understanding of complex human behaviour. A novel representation of trajectory data is proposed, called the Activity Description Vector (ADV), based on the number of times a person occupies a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell into which the scenario is spatially sampled, providing a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared with other approaches using CAVIAR dataset sequences, obtaining high accuracy in recognising the behaviour of people in a shopping centre.
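A hedged sketch of an ADV-style descriptor, assuming only the general idea stated above (per-cell occurrence counts plus local movements); the grid size, the five-bin movement coding, and the toy trajectory are illustrative assumptions, not the paper's exact definition.

```python
# Minimal sketch: Activity-Description-Vector-style trajectory descriptor.
import numpy as np

def adv(trajectory, grid=(4, 4), scene=(100.0, 100.0)):
    """trajectory: (N, 2) array of (x, y) positions inside the scene.
    Returns, for every grid cell, [presence, up, down, left, right] counts."""
    desc = np.zeros(grid + (5,))
    cw, ch = scene[0] / grid[0], scene[1] / grid[1]
    for (x0, y0), (x1, y1) in zip(trajectory[:-1], trajectory[1:]):
        i = min(int(x0 // cw), grid[0] - 1)
        j = min(int(y0 // ch), grid[1] - 1)
        desc[i, j, 0] += 1                      # occurrence in this cell
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            continue                            # no local movement to record
        if abs(dx) >= abs(dy):                  # dominant horizontal movement
            desc[i, j, 4 if dx > 0 else 3] += 1
        else:                                   # dominant vertical movement
            desc[i, j, 2 if dy > 0 else 1] += 1
    return desc.ravel()                         # cue for clustering/classifiers

walk = np.cumsum(np.random.default_rng(0).normal(1.0, 0.5, (50, 2)), axis=0) % 100
print(adv(walk).shape)                          # 4 * 4 * 5 = (80,)
```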
Abstract:
Background and objective: In this paper, we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classification of these surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We have evaluated four machine learning algorithms: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity and high complexity. Results: Accuracy outcomes achieved range between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk associated with congenital heart disease surgery.
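As a minimal sketch of the best-performing configuration (a multilayer perceptron classifying the three risk levels), using scikit-learn; the random features and network size are placeholders, not the clinical variables or architecture of the paper.

```python
# Minimal sketch: MLP classifier over three surgical-risk classes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))      # placeholder pre-surgical features
y = rng.integers(0, 3, size=300)    # 0=low, 1=medium, 2=high complexity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                  random_state=0))
print(mlp.fit(X_tr, y_tr).score(X_te, y_te))
```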
Abstract:
Human behaviour recognition has been, and still remains, a challenging problem involving different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition areas have made great efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early using a small portion of the input, in addition to its capability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of the trajectories of a person in the scene, which allows a high-level understanding of global human behaviour. The trajectory representation is used as a descriptor of the activity of the individual, and the descriptors serve as the cue of a classification stage for pattern-recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence; partial sequences are then processed to evaluate the early prediction capabilities for a given observation time of the scene. The experiments have been carried out using three different datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, different classic classifiers have been used in the experiments in order to evaluate the robustness of the proposal. The results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
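A minimal sketch of the early-recognition protocol described above: train on descriptors of complete trajectories, then test on increasing observed fractions. The fixed-length resampled trajectory used here as the descriptor is an illustrative stand-in for the paper's representation.

```python
# Minimal sketch: early behaviour recognition from partial trajectories.
import numpy as np
from sklearn.svm import SVC

def descriptor(traj, n_points=20):
    """Resample a (N, 2) trajectory to a fixed-length flat vector."""
    idx = np.linspace(0, len(traj) - 1, n_points).astype(int)
    return traj[idx].ravel()

rng = np.random.default_rng(0)
trajs = [np.cumsum(rng.normal(c, 1.0, (60, 2)), axis=0)
         for c in (0.5, -0.5) for _ in range(40)]
labels = np.repeat([0, 1], 40)      # two toy behaviour classes

# Train on full trajectories, then evaluate on growing observed prefixes.
clf = SVC().fit([descriptor(t) for t in trajs], labels)
for frac in (0.25, 0.5, 1.0):
    partial = [descriptor(t[: int(len(t) * frac)]) for t in trajs]
    print(frac, clf.score(partial, labels))
```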
Abstract:
Thesis (Master, Computing) -- Queen's University, 2016.
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests due to considerations of cost and time involved in interpreting the results of these tests by an expert cardiologist. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges and issues we address include missing ECG waves in one or more leads and the dimensionality of a large feature-set. We address these by proposing imputation and feature-selection methods. We develop heartbeat-classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
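A minimal sketch of the patient-level rule described above: classify each heartbeat with a Random Forest, then label the whole 12-lead recording as HCM when the proportion of HCM beats passes a threshold. The per-beat features and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch: per-beat classification + proportion rule per recording.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_beats = rng.normal(size=(1000, 40))   # placeholder per-beat feature vectors
y_beats = rng.integers(0, 2, size=1000) # 1 = HCM-like beat, 0 = non-HCM

beat_clf = RandomForestClassifier(n_estimators=200, random_state=0)
beat_clf.fit(X_beats, y_beats)

def classify_recording(beats, threshold=0.5):
    """Label a whole ECG by the fraction of its beats predicted as HCM."""
    p_hcm = beat_clf.predict(beats).mean()
    return ("HCM" if p_hcm >= threshold else "non-HCM"), p_hcm

print(classify_recording(rng.normal(size=(120, 40))))
```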
Abstract:
We present a machine learning-based system for automatically computing interpretable, quantitative measures of animal behavior. Through our interactive system, users encode their intuition about behavior by annotating a small set of video frames. These manual labels are converted into classifiers that can automatically annotate behaviors in screen-scale data sets. Our general-purpose system can create a variety of accurate individual and social behavior classifiers for different organisms, including mice and adult and larval Drosophila.
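A hedged sketch of the annotate-then-classify loop such a system is built on: a handful of user-labelled frames trains a classifier that then annotates the rest. The per-frame features, the label set, and the gradient-boosting choice are illustrative assumptions, not the system's actual design.

```python
# Minimal sketch: a few manual frame labels train an automatic annotator.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 30))    # per-frame pose/motion features
behaviour = rng.integers(0, 2, size=5000) # hidden ground truth (toy only)

labelled = rng.choice(5000, size=100, replace=False)  # user-annotated frames
clf = GradientBoostingClassifier().fit(features[labelled], behaviour[labelled])

# The trained classifier then annotates every remaining frame automatically.
auto_labels = clf.predict(features)
print(auto_labels[:20])
```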
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
There has been an abundance of literature on the modelling of hydrocyclones over the past 30 years. However, in the comminution area at least, the more popular commercially available packages (e.g. JKSimMet, Limn, MODSIM) use the models developed by Nageswararao and Plitt in the 1970s, either as published at that time, or with minor modification. With the benefit of 30 years of hindsight, this paper discusses the assumptions and approximations used in developing these models. Differences in model structure and the choice of dependent and independent variables are also considered. Redundancies are highlighted and an assessment made of the general applicability of each of the models, their limitations and the sources of error in their model predictions. This paper provides the latest version of the Nageswararao model based on the above analysis, in a form that can readily be implemented in any suitable programming language, or within a spreadsheet. The Plitt model is also presented in similar form.
Abstract:
Classifications of perinatal deaths have been undertaken for surveillance of causes of death, but also for auditing individual deaths to identify suboptimal care at any level, so that preventive strategies may be implemented. This paper describes the history and development of the paired obstetric and neonatal Perinatal Society of Australia and New Zealand (PSANZ) classifications in the context of other classifications. The PSANZ Perinatal Death Classification is based on obstetric antecedent factors that initiated the sequence of events leading to the death, and was developed largely from the Aberdeen and Whitfield classifications. The PSANZ Neonatal Death Classification is based on fetal and neonatal factors associated with the death. The classifications, accessible on the PSANZ website (http://www.psanz.org), have definitions and guidelines for use, a high level of agreement between classifiers, and are now being used in nearly all Australian states and New Zealand.