886 results for Support Vector Machine (SVM)
Abstract:
Impaired eye movements have a long history in schizophrenia research and meet the criteria for a reliable biomarker. However, the effects of cognitive load and task difficulty on saccadic latencies (SL) are less well understood. Recent studies showed that SL are strongly task dependent: SL are decreased in tasks with higher cognitive demand and increased in tasks with lower cognitive demand. The present study investigates SL modulation in patients with schizophrenia and their first-degree relatives. A group of 13 patients with ICD-10 schizophrenia, 10 first-degree relatives, and 24 control subjects performed two different visual tasks: a color task and a Landolt ring orientation task. We used video-based oculography to measure SL. We found that patients exhibited a similar, unspecific SL pattern in the two tasks, whereas controls and relatives exhibited 20–26% shorter average latencies in the orientation task (higher cognitive demand) than in the color task (lower cognitive demand). Moreover, classification with support vector machines suggests that relatives should be assigned to the healthy control group rather than to the patient group. Therefore, visual processing of different content does not modulate SL in patients with schizophrenia, but does modulate SL in relatives and healthy controls. The results reflect a specific oculomotor attentional dysfunction in patients with schizophrenia that is a potential state marker, possibly caused by impaired top-down disinhibition of the superior colliculus by frontal/prefrontal areas such as the frontal eye fields.
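The group-assignment result above comes from a standard SVM classifier applied to latency features. As a rough, hypothetical illustration of that kind of analysis (the feature values, group means, and pipeline below are invented, not the study's data), a cross-validated linear SVM over per-subject mean latencies might look like this:

```python
# Hedged sketch: classify subjects as patient vs. control from saccadic-latency
# features (mean latency in each task). All values are synthetic placeholders;
# this is NOT the study's actual data or analysis pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: [mean latency in color task (ms), mean latency in orientation task (ms)]
controls = np.column_stack([rng.normal(210, 15, 24), rng.normal(160, 15, 24)])
patients = np.column_stack([rng.normal(215, 20, 13), rng.normal(210, 20, 13)])
X = np.vstack([controls, patients])
y = np.array([0] * len(controls) + [1] * len(patients))  # 0 = control, 1 = patient

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy

# A relative's feature vector could then be scored against the fitted model, e.g.
# clf.fit(X, y); clf.predict([[205, 155]]) == 0 would suggest "control-like".
```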
Abstract:
This doctoral thesis proposes a biometric verification technique for mobile phones that consists of making a signature in the air with the hand holding the phone. The accelerometers integrated in the device sample the accelerations of the in-air signature movement, generating three temporal signals that can be used for user verification. Several approaches are proposed for implementing the verification system, based on the most widely used approaches in handwritten signature biometrics: template matching, with variants of the Needleman-Wunsch (NW) and Dynamic Time Warping (DTW) algorithms, Hidden Markov Models (HMM), and a statistical classifier based on Support Vector Machines (SVM). As there are no public databases of in-air signatures, two databases with different characteristics were captured to evaluate the proposed methods: one with real forgeries produced after studying recordings of genuine users making their signatures, and another with genuine samples obtained in different sessions over a long period of time. Using these databases, a large number of algorithms for implementing an in-air signature verification system were evaluated. The evaluation was conducted according to the ISO/IEC 19795 standard, adding the open-set verification scenario not covered by the standard. In addition, the characteristics that make a signature sufficiently secure were analyzed, and the permanence of in-air signatures over time was studied, with several update methods based on dynamic adaptation of the template proposed to improve performance. Finally, a prototype of the in-air signature technique was implemented for Android and iOS phones. The results of this thesis have had a strong impact, producing several publications in international journals, conferences, and books. The in-air signature has also been featured in several popular-science magazines, Web news portals, and on television. In addition, the work has won several awards in innovation competitions, and a technology exploitation agreement has been signed with a foreign company.
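Among the template-matching approaches listed above, Dynamic Time Warping is the easiest to sketch. The following is a minimal, hedged illustration of DTW verification on synthetic accelerometer traces; the signal lengths, noise levels, and acceptance threshold are assumptions, not the thesis' tuned implementation:

```python
# Hedged sketch of DTW-based template matching for in-air signatures. Each signature
# is a (T, 3) array of accelerometer samples (ax, ay, az); all data are synthetic.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two multivariate time series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # per-sample Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(1)
template = rng.standard_normal((120, 3))                        # enrolled in-air signature
genuine  = template[::2] + 0.05 * rng.standard_normal((60, 3))  # same gesture, performed faster
forgery  = rng.standard_normal((100, 3))                        # unrelated movement
# The distance is typically smaller for the genuine attempt; a verification decision
# compares it against a threshold tuned on held-out enrollment data.
print(dtw_distance(genuine, template), dtw_distance(forgery, template))
```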
Abstract:
Solar radiation estimates with clear-sky models require aerosol data. The low spatial resolution of current aerosol datasets, together with their considerable deviation from measured data, poses a problem for solar resource estimation. This paper proposes a new downscaling methodology that combines support vector machines for regression (SVR) and kriging with external drift, using data from the MACC reanalysis datasets and temperature and rainfall measurements from 213 meteorological stations in continental Spain. The SVR technique proved efficient for modeling aerosol variables. The Linke turbidity factor (TL) and the aerosol optical depth at 550 nm (AOD 550) estimated with SVR produced significantly lower errors at AERONET positions than the MACC reanalysis estimates. TL was estimated with a relative mean absolute error (rMAE) of 10.2% (compared with AERONET), against an rMAE of 18.5% for MACC. Similar behavior was observed for AOD 550, estimated with an rMAE of 8.6% (compared with AERONET), against an rMAE of 65.6% for MACC. Kriging using MACC data as an external drift proved useful for generating high-resolution maps (0.05° × 0.05°) of both aerosol variables, and we created such maps for continental Spain for the year 2008. The proposed methodology is thus a valuable tool for creating high-resolution maps of aerosol variables (TL and AOD 550); it shows meaningful improvements over the available estimated databases and therefore leads to more accurate solar resource estimates. The methodology could also be applied to the prediction of other atmospheric variables whose datasets have low resolution.
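As a hedged illustration of the regression step, the following sketch fits an SVR from station-level predictors to an aerosol variable; the predictor set, hyperparameters, and synthetic values are assumptions, and the kriging-with-external-drift step that produces the final maps is only indicated in comments:

```python
# Hedged sketch of the SVR step: regress an aerosol variable (e.g. Linke turbidity, TL)
# on station-level predictors such as coordinates, elevation, temperature, and rainfall.
# All values below are synthetic placeholders, not MACC or station measurements.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stations = 213
X = np.column_stack([
    rng.uniform(36, 43, n_stations),    # latitude (deg)
    rng.uniform(-9, 3, n_stations),     # longitude (deg)
    rng.uniform(0, 2000, n_stations),   # elevation (m)
    rng.uniform(5, 25, n_stations),     # mean temperature (deg C)
    rng.uniform(100, 900, n_stations),  # annual rainfall (mm)
])
TL = 2.5 + 0.02 * X[:, 3] - 0.0003 * X[:, 2] + rng.normal(0, 0.1, n_stations)  # toy target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
print(cross_val_score(model, X, TL, cv=5, scoring="neg_mean_absolute_error").mean())
# The fitted SVR would then be compared against AERONET observations, and its
# predictions used as the external drift for kriging onto a 0.05-degree grid.
```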
Abstract:
This thesis investigates automatic computer image recognition applied to medical imaging in digital mammography. There is interest in developing learning systems that assist radiologists in recognizing microcalcifications, to support them in breast cancer screening and prevention programs. The analysis of microcalcifications has emerged as a key technique for early diagnosis, but the design of automatic systems to recognize them is made difficult by the variability and conditions of mammographic images. This work analyzes the theoretical approaches to designing image recognition systems, with emphasis on the specific problems of detecting and classifying microcalcifications. The study covers techniques ranging from morphological operators, neural networks, and support vector machines to the most recent deep learning with convolutional neural networks, considering the importance of the concepts of scale and hierarchy at the design stage and their implications for the search for the network's architecture of connections and layers. Building on these theoretical foundations and on design elements from other work in this area by the author, three mammogram recognition systems reflecting a technological evolution are implemented, culminating in a system based on Convolutional Neural Networks (CNN) whose architecture is designed using the preceding theoretical analysis and the practical results of scale analysis carried out on our image database. The three systems are trained and validated on the DDSM mammography database, with a total of 100 training samples and 100 test samples chosen to avoid bias and to faithfully reflect a screening program. The validity of CNNs for the problem at hand is demonstrated, and a research path for designing their architecture is proposed.
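As a minimal, hedged sketch of the kind of CNN involved (not the thesis' actual architecture; the patch size and layer widths are assumptions), a small patch classifier for microcalcifications could be set up as follows:

```python
# Hedged sketch: a small CNN for classifying mammogram patches as containing
# microcalcifications or not. The architecture is illustrative only; patch size
# (32x32 grayscale) and layer sizes are assumptions, not the thesis' design.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),  # microcalcification vs. normal tissue
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PatchCNN()
dummy_batch = torch.randn(4, 1, 32, 32)  # 4 grayscale 32x32 patches
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 2])
```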
Abstract:
The polypeptide backbones and side chains of proteins are constantly moving due to thermal motion and the kinetic energy of the atoms. The B-factors of protein crystal structures reflect the fluctuation of atoms about their average positions and provide important information about protein dynamics. Computational approaches to predict thermal motion are useful for analyzing the dynamic properties of proteins with unknown structures. In this article, we utilize a novel support vector regression (SVR) approach to predict the B-factor distribution (B-factor profile) of a protein from its sequence. We explore schemes for encoding sequences and various settings for the parameters used in SVR. Based on a large dataset of high-resolution proteins, our method predicts the B-factor distribution with a Pearson correlation coefficient (CC) of 0.53. In addition, our method predicts the B-factor profile with a CC of at least 0.56 for more than half of the proteins. Our method also performs well for classifying residues (rigid vs. flexible). For almost all predicted B-factor thresholds, prediction accuracies (percent of correctly predicted residues) are greater than 70%. These results exceed the best results of other sequence-based prediction methods. (C) 2005 Wiley-Liss, Inc.
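A hedged sketch of the sequence-to-B-factor regression setup follows; the window size, one-hot encoding, and placeholder targets are assumptions rather than the paper's exact encoding schemes:

```python
# Hedged sketch: predict a per-residue B-factor from a sliding window of one-hot
# encoded amino acids using support vector regression. Targets are random
# placeholders; real training would use normalized B-factors from crystal structures.
import numpy as np
from sklearn.svm import SVR

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def encode_windows(seq: str, w: int = 7) -> np.ndarray:
    """One-hot encode a window of w residues centred on each position (zero-padded at the ends)."""
    half = w // 2
    feats = []
    for i in range(len(seq)):
        vec = np.zeros(w * len(AA))
        for k in range(-half, half + 1):
            j = i + k
            if 0 <= j < len(seq):
                vec[(k + half) * len(AA) + AA_INDEX[seq[j]]] = 1.0
        feats.append(vec)
    return np.array(feats)

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"                       # toy sequence
b_factors = np.random.default_rng(0).normal(0.0, 1.0, len(seq))  # placeholder targets

X = encode_windows(seq)
model = SVR(kernel="rbf", C=1.0).fit(X, b_factors)
print(model.predict(X[:5]))  # predicted (normalized) B-factors for the first residues
```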
Abstract:
Motivation: Targeting peptides direct nascent proteins to their specific subcellular compartments. Knowledge of targeting signals enables informed drug design and reliable annotation of gene products. However, due to the low similarity of such sequences and the dynamic nature of the sorting process, the computational prediction of the subcellular localization of proteins is challenging. Results: We contrast the feed-forward models employed by the popular TargetP/SignalP predictors with a sequence-biased recurrent network model. The models are evaluated in terms of performance at the residue level and at the sequence level, and the results demonstrate that recurrent networks improve overall prediction performance. Compared with the original results reported for TargetP, an ensemble of the tested models increases accuracy by 6% and 5% on non-plant and plant data, respectively.
Abstract:
In this study, we propose a novel method to predict the solvent accessible surface areas of transmembrane residues. For both transmembrane alpha-helix and beta-barrel residues, the correlation coefficients between the predicted and observed accessible surface areas are around 0.65. On the basis of predicted accessible surface areas, residues exposed to the lipid environment or buried inside a protein can be identified by using certain cutoff thresholds. We have extensively examined our approach based on different definitions of accessible surface areas and a variety of sets of control parameters. Given that experimentally determining the structures of membrane proteins is very difficult and membrane proteins are actually abundant in nature, our approach is useful for theoretically modeling membrane protein tertiary structures, particularly for modeling the assembly of transmembrane domains. This approach can be used to annotate the membrane proteins in proteomes to provide extra structural and functional information.
Abstract:
Using techniques from Statistical Physics, the annealed VC entropy for hyperplanes in high dimensional spaces is calculated as a function of the margin for a spherical Gaussian distribution of inputs.
Abstract:
We apply methods of Statistical Mechanics to study the generalization performance of Support Vector Machines in large data spaces.
Abstract:
We propose a hybrid generative/discriminative framework for semantic parsing that combines the hidden vector state (HVS) model and hidden Markov support vector machines (HM-SVMs). The HVS model is an extension of the basic discrete Markov model in which context is encoded as a stack-oriented state vector. HM-SVMs combine the advantages of hidden Markov models and support vector machines. By employing a modified K-means clustering method, a small set of the most representative sentences can be automatically selected from an unannotated corpus. These sentences, together with their abstract annotations, are used to train an HVS model that is subsequently applied to the whole corpus to generate semantic parsing results. The most confident parsing results are selected to generate a fully annotated corpus, which is used to train the HM-SVMs. The proposed framework has been tested on the DARPA Communicator data. Experimental results show an improvement over the baseline HVS parser when the hybrid framework is used. Compared with HM-SVMs trained on the fully annotated corpus, the hybrid framework gives comparable performance with only a small set of lightly annotated sentences. © 2008. Licensed under the Creative Commons.
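The sentence-selection step can be illustrated with plain k-means standing in for the modified k-means mentioned above; the toy utterances below are invented, Communicator-style examples:

```python
# Hedged sketch: cluster unannotated sentences and keep the one closest to each
# centroid as a "representative" to annotate. Plain k-means over TF-IDF vectors
# stands in for the modified k-means described in the abstract.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

corpus = [
    "i want to fly from boston to denver on monday",
    "show me flights from boston to denver",
    "what is the cheapest fare to san francisco",
    "i need a hotel in seattle for two nights",
    "book a rental car at the denver airport",
    "are there any morning flights to denver",
]  # hypothetical Communicator-style utterances

X = TfidfVectorizer().fit_transform(corpus)
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# For each cluster, pick the sentence nearest its centroid.
representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members].toarray() - km.cluster_centers_[c], axis=1)
    representatives.append(corpus[members[np.argmin(dists)]])
print(representatives)
# These few sentences would then be annotated and used to train the HVS model.
```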
Abstract:
Web APIs have gained increasing popularity in recent Web service technology development owing to their simple technology stack and the proliferation of mashups. However, efficiently discovering Web APIs and their documentation on the Web remains a challenging task even with the best resources available. In this paper we cast the detection of Web API documentation as a text classification problem: classifying a given Web page as Web API related or not. We propose a supervised generative topic model called feature latent Dirichlet allocation (feaLDA), which offers a generic probabilistic framework for the automatic detection of Web APIs. feaLDA not only captures the correspondence between data and the associated class labels, but also provides a mechanism for incorporating side information, such as labelled features automatically learned from data, that can effectively help improve classification performance. Extensive experiments on our Web API documentation dataset show that the feaLDA model outperforms three strong supervised baselines (naive Bayes, support vector machines, and the maximum entropy model) by over 3% in classification accuracy. In addition, feaLDA gives superior performance compared with other existing supervised topic models.
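The classification task itself can be sketched with one of the baselines named above, a linear SVM over TF-IDF features; feaLDA is a custom topic model and is not reproduced here, and the example pages are invented:

```python
# Hedged sketch of the task: classify a Web page as API documentation or not,
# using a TF-IDF + linear SVM baseline. The pages and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

pages = [
    "GET /v1/users returns a JSON list of users, requires an API key",
    "endpoint reference: POST /orders creates an order, response codes 200 401",
    "our holiday photo album from the trip to the mountains",
    "blog: ten tips for better sleep and a healthier morning routine",
]
labels = [1, 1, 0, 0]  # 1 = Web API documentation, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(pages, labels)
print(clf.predict(["API reference: DELETE /v1/items/{id} removes an item"]))  # should print [1]
```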
Abstract:
There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
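A hedged sketch of the feature-selection-plus-classification pipeline follows; the data are synthetic stand-ins for the 132 dysphonia measures, and simple univariate selection stands in for the four feature-selection algorithms used in the paper:

```python
# Hedged sketch: select a small subset of dysphonia measures, then map them to a
# PD / healthy-control decision with an SVM and a random forest. Data are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((263, 132))  # 263 voice samples x 132 dysphonia measures (toy values)
y = rng.integers(0, 2, 263)          # 1 = PD subject, 0 = healthy control (toy labels)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random forest", RandomForestClassifier(n_estimators=200))]:
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
# With real dysphonia features, subject-wise cross-validation (rather than the
# sample-wise split shown here) is needed, since the 263 samples come from only 43 subjects.
```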
Abstract:
The binding between peptide epitopes and major histocompatibility complex (MHC) proteins is a major event in the cellular immune response. Accurate prediction of the binding between short peptides and class I or class II MHC molecules is an important task in immunoinformatics. This chapter describes SVRMHC, a novel method for modeling peptide-MHC binding affinities based on support vector machine regression (SVR). SVRMHC is among a small handful of quantitative modeling methods that make predictions about precise binding affinities between a peptide and an MHC molecule. As a kernel-based learning method, SVRMHC has produced models with demonstrably strong performance in modeling peptide-MHC binding.
Abstract:
Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework for training statistical models without expensive fully annotated data. In particular, the input to our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA Communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework outperforms two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15% in F-measure.
Abstract:
Traffic incidents are a major source of traffic congestion on freeways. Freeway traffic diversion using pre-planned alternate routes has been used as a strategy to reduce traffic delays due to major incidents. However, it is not always beneficial to divert traffic when an incident occurs: route diversion may adversely impact traffic on the alternate routes and may not result in an overall benefit. This dissertation research applies Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques to predict the percent delay reduction from route diversion, to help determine whether traffic should be diverted under given conditions. The DYNASMART-P mesoscopic traffic simulation model was applied to generate the simulated data used to develop the ANN and SVR models. A sample network that comes with the DYNASMART-P package was used as the base simulation network. Combinations of different levels of incident duration, capacity loss, percent of drivers diverted, VMS (variable message sign) messaging duration, and network congestion were simulated to represent different incident scenarios. The resulting percent delay reduction, average speed, and queue length from each scenario were extracted from the simulation output. The ANN and SVR models were then calibrated for percent delay reduction as a function of all of the simulated input and output variables. The results show that both calibrated models, when applied to the same location used to generate the calibration data, predicted delay reduction with relatively high accuracy in terms of mean square error (MSE) and regression correlation. The ANN model performed better than the SVR model. Likewise, when the models were applied to a new location, only the ANN model produced comparably good delay reduction predictions under a high network congestion level.
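A hedged sketch of the SVR side of this setup follows; the feature names mirror the scenario variables listed above, but the values are synthetic placeholders rather than DYNASMART-P output:

```python
# Hedged sketch: predict percent delay reduction from route diversion as a function of
# incident-scenario variables with support vector regression. All numbers are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(10, 120, n),   # incident duration (min)
    rng.uniform(0.1, 1.0, n),  # fraction of capacity lost
    rng.uniform(0, 0.5, n),    # fraction of drivers diverted
    rng.uniform(5, 60, n),     # VMS messaging duration (min)
    rng.uniform(0.2, 1.0, n),  # network congestion level
])
# Toy response surface standing in for the simulated percent delay reduction.
y = 20 * X[:, 2] * X[:, 1] - 5 * X[:, 4] + rng.normal(0, 1, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X, y)
print(model.predict(X[:3]))  # predicted percent delay reduction for three scenarios
# The dissertation compares such a model against an ANN trained on the same scenarios.
```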