931 results for classification of Bulgarian adjectives
Abstract:
The analysis of bacterial genomes for epidemiological purposes often results in the production of a banding profile of DNA fragments characteristic of the genome under investigation. These may be produced using various methods, many of which involve the cutting or amplification of DNA into defined and reproducible characteristic fragments. It is frequently of interest to enquire whether the bacterial isolates are naturally classifiable into distinct groups based on their DNA profiles. A major problem with this approach is whether classification or clustering of the data is even appropriate. It is always possible to classify such data but it does not follow that the strains they represent are ‘actually’ classifiable into well-defined separate parts. Hence, the act of classification does not in itself answer the question: do the strains consist of a number of different distinct groups or species or do they merge imperceptibly into one another because DNA profiles vary continuously? Nevertheless, we may still wish to classify the data for ‘convenience’ even though strains may vary continuously, and such a classification has been called a ‘dissection’. This Statnote discusses the use of classificatory methods in analyzing the DNA profiles from a sample of bacterial isolates.
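The abstract does not specify a clustering method, but presence/absence banding profiles are commonly compared with the Dice coefficient and grouped by agglomerative linkage. The sketch below illustrates that generic approach on invented profiles; the similarity threshold and data are placeholders, not values from the study.

```python
# Minimal sketch: agglomerative clustering of binary DNA banding profiles
# using the Dice coefficient. Profiles and the cut-off are illustrative.

def dice(a, b):
    """Dice similarity between two presence/absence band profiles."""
    shared = sum(x and y for x, y in zip(a, b))
    return 2 * shared / (sum(a) + sum(b))

def single_linkage(profiles, threshold):
    """Merge clusters while the best between-cluster similarity >= threshold."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(dice(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        if best < threshold:
            break                      # remaining clusters are too dissimilar
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

# Four isolates scored for presence (1) / absence (0) of six bands.
profiles = [
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0, 0],
]
print(single_linkage(profiles, threshold=0.6))   # -> [[0, 1], [2, 3]]
```

Note that the algorithm always returns *some* partition; as the abstract stresses, this alone cannot tell a natural classification from a mere dissection.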
Abstract:
We address the important bioinformatics problem of predicting protein function from a protein's primary sequence. We consider the functional classification of G-Protein-Coupled Receptors (GPCRs), whose functions are specified in a class hierarchy. We tackle this task using a novel top-down hierarchical classification system where, for each node in the class hierarchy, the predictor attributes to be used in that node and the classifier to be applied to the selected attributes are chosen in a data-driven manner. Compared with a previous hierarchical classification system selecting classifiers only, our new system significantly reduced processing time without significantly sacrificing predictive accuracy.
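The key idea above, per-node selection of attributes and a per-node classifier in a top-down descent of the class hierarchy, can be sketched as follows. The hierarchy, feature subsets, centroids, and the nearest-centroid rule are all illustrative stand-ins; the paper's actual classifiers and GPCR data are not reproduced here.

```python
# Minimal sketch of top-down hierarchical classification: each internal
# node routes an instance to one child using its own selected feature
# subset and a simple nearest-centroid rule (a placeholder classifier).

def nearest(x, centroids, features):
    """Pick the child whose centroid is closest on this node's features."""
    def dist(c):
        return sum((x[f] - c[f]) ** 2 for f in features)
    return min(centroids, key=lambda child: dist(centroids[child]))

# Per-node configuration: selected feature indices and child centroids
# (hypothetical class names; leaves have no entry in `hierarchy`).
hierarchy = {
    "root":   {"features": [0],    "children": {"classA": [0.0, 0.0, 0.0],
                                                "classB": [1.0, 0.0, 0.0]}},
    "classA": {"features": [1, 2], "children": {"A1": [0.0, 0.0, 0.0],
                                                "A2": [0.0, 1.0, 1.0]}},
}

def classify(x, node="root"):
    """Descend the hierarchy until reaching a leaf class."""
    if node not in hierarchy:
        return node
    cfg = hierarchy[node]
    child = nearest(x, cfg["children"], cfg["features"])
    return classify(x, child)

print(classify([0.1, 0.9, 0.8]))   # routed root -> classA -> A2
```

Because each node only ever inspects its own feature subset, the per-node selection described in the abstract directly reduces the work done along any root-to-leaf path.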
Abstract:
This article categorises manufacturing strategy design processes and presents the characteristics of the resulting strategies. This work will therefore help practitioners appreciate the implications of their planning activities. The article presents a framework for classifying manufacturing strategy processes and the resulting strategies. Each process and its respective strategy is then considered in detail, and from this consideration a preferred approach for formulating a world-class manufacturing strategy is presented. Finally, conclusions and recommendations for further work are given.
Abstract:
The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept supported by 'consensus' criteria and data from molecular pathological studies. This review discusses first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist ('discrete' model), (2) that relatively distinct diseases exist but exhibit overlapping features ('overlap' model), and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which there is continuous variation in clinical/pathological features from one case to another ('continuum' model). Third, to distinguish between models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.
Abstract:
There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
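The pipeline described above (compute many dysphonia measures, select a parsimonious subset, then classify) can be sketched generically. The toy data, the Fisher-style separation score, and the top-k selection below are illustrative assumptions; the paper's 132 measures, its four selection algorithms, and its random forest / SVM classifiers are not reproduced.

```python
# Minimal sketch of filter-style feature selection for a binary
# classification task, standing in for the dysphonia-measure pipeline.

def fisher_score(xs, ys, f):
    """Between-class mean separation of feature f over pooled spread;
    a simple stand-in for the selection criteria used in the paper."""
    a = [x[f] for x, y in zip(xs, ys) if y == 0]
    b = [x[f] for x, y in zip(xs, ys) if y == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    spread = sum(abs(v - ma) for v in a) + sum(abs(v - mb) for v in b) + 1e-9
    return abs(ma - mb) / spread

def select_features(xs, ys, k):
    """Indices of the k features that best separate the two classes."""
    scores = {f: fisher_score(xs, ys, f) for f in range(len(xs[0]))}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy "dysphonia measures": feature 1 separates the classes; 0 and 2 are noise.
X = [[0.5, 0.1, 0.9], [0.4, 0.2, 0.1], [0.6, 0.9, 0.5], [0.5, 1.0, 0.4]]
y = [0, 0, 1, 1]
print(select_features(X, y, k=1))   # -> [1]
```

In practice the selected subset would then be passed to a trained classifier, mirroring the "select, then map to a binary response" structure of the study.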
Abstract:
Despite the large body of research regarding the role of memory in OCD, the results are described as mixed at best (Hermans et al., 2008). For example, inconsistent findings have been reported with respect to basic capacity, intact verbal memory, and generally affected visuospatial memory. We suggest that this is due to the traditional pursuit of OCD memory impairment as a matter of general capacity and/or domain specificity (visuospatial vs. verbal). In contrast, we conclude from our experiments (i.e., Harkin & Kessler, 2009, 2011; Harkin, Rutherford, & Kessler, 2011) and the recent literature (e.g., Greisberg & McKay, 2003) that OCD memory impairment is secondary to executive dysfunction, and more specifically we identify three common factors (EBL: Executive-functioning efficiency, Binding complexity, and memory Load) that we generalize to 58 experimental findings from 46 OCD memory studies. As a result we explain otherwise inconsistent findings (e.g., intact vs. deficient verbal memory) that are difficult to reconcile within a capacity- or domain-specific perspective. We conclude by discussing the relationship between our account and others', which in most cases is complementary rather than contradictory.
Abstract:
MOTIVATION: G protein-coupled receptors (GPCRs) play an important role in many physiological systems by transducing an extracellular signal into an intracellular response. Over 50% of all marketed drugs are targeted towards a GPCR. There is considerable interest in developing an algorithm that could effectively predict the function of a GPCR from its primary sequence. Such an algorithm is useful not only in identifying novel GPCR sequences but also in characterizing the interrelationships between known GPCRs. RESULTS: An alignment-free approach to GPCR classification has been developed using techniques drawn from data mining and proteochemometrics. A dataset of over 8000 sequences was constructed to train the algorithm; this represents one of the largest GPCR datasets currently available. A predictive algorithm was developed based upon the simplest reasonable numerical representation of the protein's physicochemical properties. A selective top-down approach was developed, which used a hierarchical classifier to assign sequences to subdivisions within the GPCR hierarchy. The predictive performance of the algorithm was assessed against several standard data mining classifiers and further validated against Support Vector Machine-based GPCR prediction servers. The selective top-down approach achieves significantly higher accuracy than standard data mining methods in almost all cases.
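The core of an alignment-free approach is mapping a variable-length primary sequence to a fixed-length numerical vector of physicochemical properties. The sketch below uses amino-acid composition plus mean Kyte-Doolittle hydropathy as that representation; the paper's exact descriptors differ, so this only illustrates the general construction.

```python
# Minimal sketch of an alignment-free featurization: each sequence becomes
# a fixed-length vector (20 composition fractions + mean hydropathy).

# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def featurize(seq):
    """Fixed-length vector: 20 composition fractions + mean hydropathy."""
    n = len(seq)
    comp = [seq.count(aa) / n for aa in sorted(KD)]
    mean_kd = sum(KD[aa] for aa in seq) / n
    return comp + [mean_kd]

v = featurize("MKTIIALSYIFCLVFA")   # any primary sequence of standard residues
print(len(v))                       # always 21, regardless of sequence length
```

Once every sequence is a vector of the same length, standard data-mining classifiers (and the hierarchical scheme described in the abstract) can be applied directly.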
Abstract:
Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods used to assign data with similar properties into smaller, more meaningful groups. Two commonly used clustering techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA calculates the variance between variables and groups them into a few uncorrelated principal components (PCs) that are orthogonal to each other. Hierarchical clustering is carried out by separating data into many clusters and merging similar clusters together. Here, we use the example of human leukocyte antigen (HLA) supertype classification to demonstrate the use of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. The reader should bear in mind, however, that these methods are also available in other software, such as SIMCA, statistiXL, and R.
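PCA as described above (centre the data, find orthogonal directions of maximal variance, project) can be written in a few lines. The toy matrix below is invented; in practice the rows would be, e.g., HLA binding descriptors, and dedicated packages such as GOLPE, Sybyl, or R would be used.

```python
# Minimal sketch of PCA via the eigendecomposition of the covariance matrix.
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the top n_components principal axes."""
    Xc = X - X.mean(axis=0)                 # centre each variable
    cov = np.cov(Xc, rowvar=False)          # variable-by-variable covariance
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order], vals[order]

# Six samples described by two correlated variables (illustrative data).
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])
scores, variances = pca(X, n_components=1)
print(scores.ravel())                       # coordinates along PC1
```

The variance of the scores along each component equals the corresponding eigenvalue, which is what makes the leading PCs a compact, uncorrelated summary of the data.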
Abstract:
The paper presents our considerations related to the creation of a digital corpus of Bulgarian dialects. The dialectological archive of the Bulgarian language consists of more than 250 audio tapes, all recorded between 1955 and 1965 in the course of regular dialectological expeditions throughout the country. The records typically contain interviews with inhabitants of small villages in Bulgaria, usually covering topics such as birth, everyday life, marriage, family relationships, and death. Only a few tapes contain folk songs from different regions of the country. Taking into account the progressive deterioration of the magnetic media and the realistic prospect of data loss, the Institute for Bulgarian Language at the Academy of Sciences launched a project in 1997 aimed at the restoration and digital preservation of the dialectological archive. Within the framework of this project more than half of the records were digitized, de-noised and stored on digital recording media. Since then, restoration and digitization activities have been carried out at the Institute on a regular basis, and as a result a large collection of sound files has been gathered. Our further efforts are aimed at the creation of a digital corpus of Bulgarian dialects, which will be made available for phonological and linguistic research. Besides the sound files, such corpora typically include two basic elements: a transcription aligned with the sound file, and a set of standardized metadata that describes the corpus. In this work we present considerations on how these tasks could be realized in the case of the corpus of Bulgarian dialects. Our suggestions are based on a comparative analysis of existing methods and techniques for building such corpora, selecting the ones that best fit the particular needs. Our experience may be useful to similar institutions storing folklore archives, history-related spoken records, etc.
Abstract:
Preserving and presenting the Bulgarian folklore heritage is a long-term commitment of scholars and researchers working in many areas. This article presents an ontological model of Bulgarian folklore knowledge, exploring knowledge technologies for representing the semantics of the phenomena of our traditional culture. This model is a step toward the development of the digital library for the “Bulgarian Folklore Heritage” virtual exposition, which is part of the “Knowledge Technologies for Creation of Digital Presentation and Significant Repositories of Folklore Heritage” project.
Abstract:
The paper presents an ongoing effort aimed at building an electronic archive of documents issued by the Bulgarian Ministry of Education in the 1940s and 1950s. These funds are stored in the Archive of the Ministry of the People’s Education within the State Archival Fund of the General Department of Archives at the Council of Ministers of Bulgaria. Our basic concern is not the digitization process per se, but the subsequent organization of the archive in a clear and easily searchable way that allows various types of users to access the documents of interest to them. Here we present the variety of documents stored in the archival collection, along with suggestions for their electronic organization. We suggest using an ontology-based presentation of the archive. The basic benefit of this approach is the possibility of searching the collection according to the stored content categories.
Abstract:
A technology for the classification of electronic documents, based on the perturbation theory of pseudoinverse matrices, is proposed.
Abstract:
* This study was supported in part by the Natural Sciences and Engineering Research Council of Canada, and by the Gastrointestinal Motility Laboratory (University of Alberta Hospitals) in Edmonton, Alberta, Canada.
Abstract:
This paper deals with the classification of news items in ePaper, a prototype system for a future personalized newspaper service on a mobile reading device. The ePaper system aggregates news items from various news providers and delivers to each subscribed user (reader) a personalized electronic newspaper, utilizing content-based and collaborative filtering methods. The ePaper can also provide users with "standard" (i.e., not personalized) editions of selected newspapers, as well as browsing capabilities in the repository of news items. This paper concentrates on the automatic classification of incoming news using a hierarchical news ontology. Based on this classification on the one hand, and on the users' profiles on the other, the personalization engine of the system is able to deliver a personalized paper to each user on her mobile reading device.
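Classifying an item into a hierarchical topic ontology can be sketched as a top-down descent scored by keyword overlap. The ontology, keyword sets, and scoring rule below are invented for illustration; the ePaper system's actual ontology and classifier are not described in enough detail to reproduce.

```python
# Minimal sketch: route a news item down a topic ontology by keyword overlap.

ontology = {                      # hypothetical parent -> children topics
    "news": ["sports", "politics"],
    "sports": ["football", "tennis"],
}
keywords = {                      # hypothetical keyword sets per topic
    "sports": {"match", "team", "league", "player"},
    "politics": {"election", "parliament", "minister"},
    "football": {"goal", "striker", "league"},
    "tennis": {"serve", "racket", "set"},
}

def classify(tokens, node="news"):
    """At each level pick the child whose keywords overlap the item's
    tokens most; stop at a leaf or when no child matches at all."""
    children = ontology.get(node, [])
    if not children:
        return node
    best = max(children, key=lambda c: len(keywords[c] & tokens))
    if not keywords[best] & tokens:
        return node               # no child matches: stay at this level
    return classify(tokens, best)

item = {"the", "team", "scored", "a", "goal", "in", "the", "league"}
print(classify(item))             # -> football
```

The resulting topic label is exactly the kind of signal a personalization engine can match against per-user interest profiles.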
Abstract:
A major drawback of artificial neural networks is their black-box character, so rule extraction algorithms are becoming more and more important for explaining the knowledge embedded in trained networks. In this paper, we use a method for symbolic knowledge extraction from neural networks once they have been trained on the desired function. The basis of this method is the weights of the trained neural network. The method allows knowledge extraction from neural networks with continuous inputs and outputs, as well as rule extraction. An example application is shown, based on the extraction of the average load demand of a power plant.
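Weight-based rule extraction can be illustrated on a single Boolean neuron, in the spirit of subset-style algorithms: enumerate the minimal input subsets whose combined weights exceed the firing threshold and emit each as an IF-THEN rule. The weights below are set by hand (a unit computing x1 AND x2) rather than actually trained, and the method shown is a generic textbook construction, not necessarily the one used in the paper.

```python
# Minimal sketch of subset-style rule extraction from a trained neuron's
# weights: find minimal input subsets that alone push the activation
# past the threshold (assuming all other inputs are 0).
from itertools import combinations

def extract_rules(weights, threshold, names):
    """Return IF-THEN antecedents as strings; minimal subsets only."""
    rules = []
    for r in range(1, len(weights) + 1):
        for subset in combinations(range(len(weights)), r):
            if any(set(prev) <= set(subset) for prev in rules):
                continue                     # skip non-minimal supersets
            if sum(weights[i] for i in subset) > threshold:
                rules.append(subset)
    return [" AND ".join(names[i] for i in s) for s in rules]

weights = [0.6, 0.6, 0.1]       # hand-set: fires only when x1 and x2 are on
threshold = 1.0
print(extract_rules(weights, threshold, ["x1", "x2", "x3"]))
```

For networks with continuous inputs and outputs, as in the power-plant example, the same idea is applied with interval conditions rather than Boolean antecedents.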