155 results for Classification Rules
at Université de Lausanne, Switzerland
Abstract:
The increase of publicly available sequencing data has allowed for rapid progress in our understanding of genome composition. As new information becomes available, we should constantly be updating and reanalyzing existing and newly acquired data. In this report we focus on transposable elements (TEs), which make up a significant portion of nearly all sequenced genomes. Our ability to accurately identify and classify these sequences is critical to understanding their impact on host genomes. At the same time, as we demonstrate in this report, problems with existing classification schemes have led to significant misunderstandings of the evolution of both TE sequences and their host genomes. In a pioneering publication, Finnegan (1989) proposed classifying all TE sequences into two classes based on transposition mechanisms and structural features: the retrotransposons (class I) and the DNA transposons (class II). We have retraced how ideas regarding TE classification and annotation in both prokaryotic and eukaryotic scientific communities have changed over time. This has led us to observe that: (1) a number of TEs have convergent structural features and/or transposition mechanisms that have led to misleading conclusions regarding their classification, (2) the evolution of TEs is similar to that of viruses in having several unrelated origins, (3) there might be at least 8 classes and 12 orders of TEs, including 10 novel orders. In an effort to address these classification issues we propose: (1) the outline of a universal TE classification, (2) a set of methods and classification rules that could be used by all scientific communities involved in the study of TEs, and (3) a 5-year schedule for the establishment of an International Committee for Taxonomy of Transposable Elements (ICTTE).
Abstract:
HAMAP (High-quality Automated and Manual Annotation of Proteins-available at http://hamap.expasy.org/) is a system for the automatic classification and annotation of protein sequences. HAMAP provides annotation of the same quality and detail as UniProtKB/Swiss-Prot, using manually curated profiles for protein sequence family classification and expert curated rules for functional annotation of family members. HAMAP data and tools are made available through our website and as part of the UniRule pipeline of UniProt, providing annotation for millions of unreviewed sequences of UniProtKB/TrEMBL. Here we report on the growth of HAMAP and updates to the HAMAP system since our last report in the NAR Database Issue of 2013. We continue to augment HAMAP with new family profiles and annotation rules as new protein families are characterized and annotated in UniProtKB/Swiss-Prot; the latest version of HAMAP (as of 3 September 2014) contains 1983 family classification profiles and 1998 annotation rules (up from 1780 and 1720). We demonstrate how the complex logic of HAMAP rules allows for precise annotation of individual functional variants within large homologous protein families. We also describe improvements to our web-based tool HAMAP-Scan which simplify the classification and annotation of sequences, and the incorporation of an improved sequence-profile search algorithm.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). A simple k-nearest neighbor algorithm is considered as a benchmark model. PNN is a neural-network reformulation of well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that they can easily be used in decision support systems dealing with problems of automatic classification. Support vector machines are an implementation of the principles of statistical learning theory for classification tasks. Recently they have been successfully applied to different environmental topics: classification of soil types and hydrogeological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real-data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
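The PNN principle described in this abstract, a Parzen kernel density estimate per class combined with a maximum a posteriori decision rule, can be illustrated in a few lines. This is a minimal sketch, not the paper's implementation: the function name, the Gaussian kernel, the bandwidth `sigma`, and the empirical default priors are all illustrative choices.

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5, priors=None):
    """Classify point x by a Parzen (Gaussian) kernel density estimate
    per class combined with a MAP decision rule, PNN-style."""
    classes = np.unique(y_train)
    if priors is None:
        # default to empirical class frequencies as priors
        priors = {c: float(np.mean(y_train == c)) for c in classes}
    scores = {}
    for c in classes:
        Xc = X_train[y_train == c]
        # Gaussian kernel density estimate at x for class c
        d2 = np.sum((Xc - x) ** 2, axis=1)
        density = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
        scores[c] = priors[c] * density  # posterior up to a constant
    return max(scores, key=scores.get), scores
```

Because the class scores are unnormalized posteriors, they also give the quantification of prediction confidence that the abstract highlights as a strength of PNN.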
Abstract:
This paper presents a semisupervised support vector machine (SVM) that efficiently integrates the information of both labeled and unlabeled pixels. The method's performance is illustrated on the relevant problem of very high resolution image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on very high resolution (VHR) multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. In addition, its simplicity and the few parameters involved make the method versatile and workable by inexperienced users.
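The kernel deformation described above can be sketched as a convex combination of a base RBF kernel and a second kernel intended to encode labeled–unlabeled similarities. This is a minimal illustration of the idea, not the paper's implementation; the function names and the mixing weight `mu` are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def composite_kernel(K_base, K_likelihood, mu=0.5):
    """Deform the supervised base kernel with a likelihood kernel by
    convex combination; mu balances labeled vs. unlabeled information."""
    return mu * K_base + (1.0 - mu) * K_likelihood
```

Since a convex combination of valid kernels is itself a valid (symmetric, positive semidefinite) kernel, the composite matrix can be passed directly to any kernel SVM solver that accepts precomputed kernels.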
Abstract:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance models (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the performance of classification, which is obtained using an SVM with an RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and provide a key element of an automatic difficult intubation assessment system.
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations on univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
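The abstract's finding that bootstrap estimates tracked the independent validation set more closely than cross-validation can be illustrated with an out-of-bag bootstrap accuracy estimator. This is a generic sketch, not the MAQC-II protocol; `model_fit` and `model_predict` are placeholder callables standing in for any of the feature-selection/classifier combinations studied.

```python
import numpy as np

def bootstrap_accuracy(model_fit, model_predict, X, y, n_boot=50, seed=None):
    """Out-of-bag bootstrap estimate of classification accuracy:
    fit on a resample drawn with replacement, evaluate on the
    samples left out of that resample, and average over repeats."""
    rng = np.random.default_rng(seed)
    n = len(y)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag indices
        if oob.size == 0:
            continue
        params = model_fit(X[idx], y[idx])
        accs.append(np.mean(model_predict(params, X[oob]) == y[oob]))
    return float(np.mean(accs))
```

Each out-of-bag set plays the role of a small internal "validation set", which is one intuition for why this family of estimators can behave more like an external validation than k-fold cross-validation.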
Abstract:
To compare the impact of meeting specific classification criteria [modified New York (mNY), European Spondyloarthropathy Study Group (ESSG), and Assessment of SpondyloArthritis international Society (ASAS) criteria] on anti-tumor necrosis factor (anti-TNF) drug retention, and to determine predictive factors of better drug survival. All patients fulfilling the ESSG criteria for axial spondyloarthritis (SpA) with available data on the axial ASAS and mNY criteria, and who had received at least one anti-TNF treatment were retrospectively retrieved in a single academic institution in Switzerland. Drug retention was computed using survival analysis (Kaplan-Meier), adjusted for potential confounders. Of the 137 patients classified as having axial SpA using the ESSG criteria, 112 also met the ASAS axial SpA criteria, and 77 fulfilled the mNY criteria. Drug retention rates at 12 and 24 months for the first biologic therapy were not significantly different between the diagnostic groups. Only the small ASAS non-classified axial SpA group (25 patients) showed a nonsignificant trend toward shorter drug survival. Elevated CRP level, but not the presence of bone marrow edema on magnetic resonance imaging (MRI) scans, was associated with significantly better drug retention (OR 7.9, ICR 4-14). In this cohort, anti-TNF drug survival was independent of the classification criteria. Elevated CRP level, but not positive MRI, was associated with better drug retention.
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
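The posterior-probability family of heuristics mentioned above can be illustrated with smallest-margin (breaking-ties) uncertainty sampling: rank unlabeled pixels by the gap between their two largest class posteriors and query the smallest gaps. This is a minimal sketch of one common heuristic among those the review covers, not a specific algorithm from the paper.

```python
import numpy as np

def uncertainty_sampling(proba, n_queries=5):
    """Return the indices of the `n_queries` unlabeled samples the user
    should label next, ranked by posterior uncertainty.

    `proba` is an (n_samples, n_classes) array of class posteriors;
    a small margin between the top two classes means high uncertainty.
    """
    sorted_p = np.sort(proba, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # small margin = uncertain
    return np.argsort(margin)[:n_queries]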
Resumo:
Background Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the training set. Results Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits based on the presence of fever and myalgia. The area under the receiver operating characteristic curves (AUROCC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2 and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group compared with only 38% for Model 1 and 54% for Model 3. Conclusions A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROCC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
Resumo:
To resolve the share of limited resources, animals often compete through exchange of signals about their relative motivation to compete. When two competitors are similarly motivated, the resolution of conflicts may be achieved in the course of an interactive process. In barn owls, Tyto alba, in which siblings vocally compete during the prolonged absence of parents over access to the next delivered food item, we investigated what governs the decision to leave or enter a contest, and at which level. Siblings alternated periods during which one of the two individuals vocalized more than the other. Individuals followed turn-taking rules to interrupt each other and momentarily dominate the vocal competition. These social rules were weakly sensitive to hunger level and age hierarchy. Hence, the investment in a conflict is determined not only by need and resource-holding potential, but also by social interactions. The use of turn-taking rules governing individual vocal investment has rarely been shown in a competitive context. We hypothesized that these rules would allow individuals to remain alert to one another's motivation while maintaining the cost of vocalizing at the lowest level.
Resumo:
This study presents a classification criteria for two-class Cannabis seedlings. As the cultivation of drug type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine cannabis plant's chemotype from seized material in order to ascertain that the plantation is legal or not. In this study, the classification analysis is based on data obtained from the relative proportion of three major leaf compounds measured by gas-chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug type (illegal) and fiber type (legal) cannabis at an early stage of the growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker specifications (i.e. prior probability distributions on cannabis type and consequences of classification measured by losses). Classification rates are computed with two statistical models and results are compared. Sensitivity analysis is then performed to analyze the robustness of classification criteria.
Resumo:
Résumé de la thèse L'évolution des systèmes policiers donne une place prépondérante à l'information et au renseignement. Cette transformation implique de développer et de maintenir un ensemble de processus permanent d'analyse de la criminalité, en particulier pour traiter des événements répétitifs ou graves. Dans une organisation aux ressources limitées, le temps consacré au recueil des données, à leur codification et intégration, diminue le temps disponible pour l'analyse et la diffusion de renseignements. Les phases de collecte et d'intégration restent néanmoins indispensables, l'analyse n'étant pas possible sur des données volumineuses n'ayant aucune structure. Jusqu'à présent, ces problématiques d'analyse ont été abordées par des approches essentiellement spécialisées (calculs de hot-sports, data mining, ...) ou dirigées par un seul axe (par exemple, les sciences comportementales). Cette recherche s'inscrit sous un angle différent, une démarche interdisciplinaire a été adoptée. L'augmentation continuelle de la quantité de données à analyser tend à diminuer la capacité d'analyse des informations à disposition. Un bon découpage (classification) des problèmes rencontrés permet de délimiter les analyses sur des données pertinentes. Ces classes sont essentielles pour structurer la mémoire du système d'analyse. Les statistiques policières de la criminalité devraient déjà avoir répondu à ces questions de découpage de la délinquance (classification juridique). Cette décomposition a été comparée aux besoins d'un système de suivi permanent dans la criminalité. La recherche confirme que nos efforts pour comprendre la nature et la répartition du crime se butent à un obstacle, à savoir que la définition juridique des formes de criminalité n'est pas adaptée à son analyse, à son étude. Depuis près de vingt ans, les corps de police de Suisse romande utilisent et développent un système de classification basé sur l'expérience policière (découpage par phénomène). 
Cette recherche propose d'interpréter ce système dans le cadre des approches situationnelles (approche théorique) et de le confronter aux données « statistiques » disponibles pour vérifier sa capacité à distinguer les formes de criminalité. La recherche se limite aux cambriolages d'habitations, un délit répétitif fréquent. La théorie des opportunités soutien qu'il faut réunir dans le temps et dans l'espace au minimum les trois facteurs suivants : un délinquant potentiel, une cible intéressante et l'absence de gardien capable de prévenir ou d'empêcher le passage à l'acte. Ainsi, le délit n'est possible que dans certaines circonstances, c'est-à-dire dans un contexte bien précis. Identifier ces contextes permet catégoriser la criminalité. Chaque cas est unique, mais un groupe de cas montre des similitudes. Par exemple, certaines conditions avec certains environnements attirent certains types de cambrioleurs. Deux hypothèses ont été testées. La première est que les cambriolages d'habitations ne se répartissent pas uniformément dans les classes formées par des « paramètres situationnels » ; la deuxième que des niches apparaissent en recoupant les différents paramètres et qu'elles correspondent à la classification mise en place par la coordination judiciaire vaudoise et le CICOP. La base de données vaudoise des cambriolages enregistrés entre 1997 et 2006 par la police a été utilisée (25'369 cas). Des situations spécifiques ont été mises en évidence, elles correspondent aux classes définies empiriquement. Dans une deuxième phase, le lien entre une situation spécifique et d'activité d'un auteur au sein d'une même situation a été vérifié. Les observations réalisées dans cette recherche indiquent que les auteurs de cambriolages sont actifs dans des niches. Plusieurs auteurs sériels ont commis des délits qui ne sont pas dans leur niche, mais le nombre de ces infractions est faible par rapport au nombre de cas commis dans la niche. 
Un système de classification qui correspond à des réalités criminelles permet de décomposer les événements et de mettre en place un système d'alerte et de suivi « intelligent ». Une nouvelle série dans un phénomène sera détectée par une augmentation du nombre de cas de ce phénomène, en particulier dans une région et à une période donnée. Cette nouvelle série, mélangée parmi l'ensemble des délits, ne serait pas forcément détectable, en particulier si elle se déplace. Finalement, la coopération entre les structures de renseignement criminel opérationnel en Suisse romande a été améliorée par le développement d'une plateforme d'information commune et le système de classification y a été entièrement intégré.