873 results for Support Vector Machines and Naive Bayes Classifier
Abstract:
Phosphorylation is among the most crucial and well-studied post-translational modifications. It is involved in multiple cellular processes, which makes phosphorylation prediction vital for understanding protein functions. However, wet-lab techniques are labour- and time-intensive, so computational tools are required for efficiency. This project aims to provide a novel way to predict phosphorylation sites from protein sequences, by adding flexibility and the Sezerman Grouping amino acid similarity measure to previous methods, since new protein sequences are discovered at a greater rate than protein structures are determined. The predictor, NOPAY, relies on Support Vector Machines (SVMs) for classification. The features include amino acid encoding, amino acid grouping, predicted secondary structure, predicted protein disorder, predicted protein flexibility, solvent accessibility, hydrophobicity and volume. As a result, we have improved phosphorylation prediction accuracy by 3% for Homo sapiens and by 6.1% for Mus musculus. Sensitivity at 99% specificity was also increased by 6% for Homo sapiens and by 5% for Mus musculus on independent test sets. When enough data are available, future versions of the software may also be able to predict phosphorylation sites in other organisms.
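A minimal sketch of the general technique described above, window-based classification of candidate phosphorylation sites with an SVM, is given below. It is not NOPAY itself: the window length, one-hot encoding, toy sequences, and the scikit-learn SVC are illustrative assumptions, and the predicted structure, disorder, flexibility and grouping features are not reproduced here.

```python
# Hedged sketch: SVM classification of sequence windows centred on candidate S/T/Y residues.
# Window length, encoding, and training data are illustrative, not the NOPAY feature set.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_window(window):
    """One-hot encode a fixed-length sequence window centred on a candidate site."""
    vec = np.zeros(len(window) * len(AMINO_ACIDS))
    for pos, aa in enumerate(window):
        if aa in AA_INDEX:
            vec[pos * len(AMINO_ACIDS) + AA_INDEX[aa]] = 1.0
    return vec

# Toy 9-residue windows with the candidate S/T/Y at the centre; label 1 = phosphorylated.
windows = ["RKRASIEEA", "GRRQSPVAA", "VVGASAAGV", "ILLGTGAIL"]
labels = [1, 1, 0, 0]

X = np.array([encode_window(w) for w in windows])
y = np.array(labels)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([encode_window("PLARSPEPK")]))  # predicted class for a new window
```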
Abstract:
Second order matrix equations arise in the description of real dynamical systems. Traditional modal control approaches utilise the eigenvectors of the undamped system to diagonalise the system matrices. A regrettable consequence of this approach is the discarding of residual off-diagonal terms in the modal damping matrix. This has particular importance for systems containing skew-symmetry in the damping matrix which is entirely discarded in the modal damping matrix. In this paper a method to utilise modal control using the decoupled second order matrix equations involving non-classical damping is proposed. An example of modal control successfully applied to a rotating system is presented in which the system damping matrix contains skew-symmetric components.
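In conventional notation (assumed here, not quoted from the paper), the modal decomposition the abstract refers to, and the term it says is discarded, can be written as follows.

```latex
% Conventional modal decomposition of a damped second-order system (notation assumed).
\[
  M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = f(t), \qquad x(t) = \Phi\, q(t),
\]
% where the columns of \Phi are the mass-normalised eigenvectors of the undamped
% problem (K - \omega^2 M)\phi = 0, so that
\[
  \Phi^{\mathsf{T}} M \Phi = I, \qquad
  \Phi^{\mathsf{T}} K \Phi = \operatorname{diag}(\omega_1^2, \dots, \omega_n^2),
\]
% while the modal damping matrix \Phi^{\mathsf{T}} C \Phi is in general full. Classical
% modal control keeps only its diagonal; any skew-symmetric part of C maps to a
% skew-symmetric (hence zero-diagonal) contribution to \Phi^{\mathsf{T}} C \Phi and is
% therefore discarded entirely, which is the issue the proposed method addresses.
```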
Abstract:
Virtual screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a unique binding site for the target, but it has been demonstrated that diverse ligands interact with unrelated parts of the target, and many VS methods do not take this relevant fact into account. This problem is circumvented by a novel VS methodology named BINDSURF, which scans the whole protein surface to find new hotspots where ligands might potentially interact, and which is implemented on latest-generation massively parallel GPU hardware, allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods, and of BINDSURF in particular, is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. To improve the accuracy of the scoring functions used in BINDSURF, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) methods are trained on databases of known active (drugs) and inactive compounds, this information being exploited afterwards to improve BINDSURF VS predictions.
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. To improve the accuracy of the scoring functions used in most VS methods, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) methods are trained on databases of known active (drugs) and inactive compounds, this information being exploited afterwards to improve VS predictions.
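The hybrid rescoring idea can be sketched as follows; the descriptors, toy data, and the simple averaging of the two classifiers' outputs are assumptions for illustration, not the actual training set or combination rule used in the study.

```python
# Hedged sketch: train an SVM and a neural network on descriptors of known
# active/inactive compounds and blend their probabilities into one activity score.
# Descriptors, data, and the 50/50 blend are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy descriptor vectors (e.g. per-compound interaction-energy terms) and labels:
# 1 = known active (drug), 0 = known inactive.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

svm = SVC(probability=True).fit(X, y)
nnet = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def rescore(descriptors):
    """Average the two classifiers' predicted activity probabilities."""
    p_svm = svm.predict_proba(descriptors)[:, 1]
    p_nn = nnet.predict_proba(descriptors)[:, 1]
    return 0.5 * (p_svm + p_nn)

print(rescore(rng.normal(size=(3, 8))))  # blended scores for three new compounds
```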
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. Most VS methods assume a unique binding site for the target, but it has been demonstrated that diverse ligands interact with unrelated parts of the target, and many VS methods do not take this relevant fact into account. This problem is circumvented by a novel VS methodology named BINDSURF, which scans the whole protein surface to find new hotspots where ligands might potentially interact, and which is implemented on massively parallel Graphics Processing Units (GPUs), allowing fast processing of large ligand databases. BINDSURF can thus be used in drug discovery, drug design and drug repurposing, and therefore helps considerably in clinical research. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even nowadays these uncertainties are not completely understood. To solve this problem, we propose a novel approach in which neural networks are trained on databases of known active (drugs) and inactive compounds and later used to improve VS predictions.
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade Gama, Graduate Program in Biomedical Engineering, 2015.
Abstract:
The eggs of the dengue fever vector Aedes aegypti possess the ability to undergo an extended quiescence period, hosting a fully developed first-instar larva within the chorion. As a result of this life history stage, pharate larvae can withstand months of dormancy inside the egg, where they depend on stored reserves of maternal origin. This adaptation, known as pharate first instar quiescence, allows A. aegypti to cope with fluctuations in water availability. An examination of this fundamental adaptation has shown that there are trade-offs associated with it. Aedes aegypti mosquitoes are frequently associated with urban habitats that may contain metal pollution. My research has demonstrated that the duration of this quiescence and the extent of nutritional depletion associated with it affect the physiology and survival of larvae that hatch in a suboptimal habitat; nutrient reserves decrease during pharate first instar quiescence and alter subsequent larval and adult fitness. The duration of quiescence compromises metal tolerance physiology and is coupled to a decrease in metallothionein mRNA levels. My findings also indicate that even low levels of environmentally relevant larval metal stress alter the parameters that determine vector capacity. My research has also demonstrated that extended pharate first instar quiescence can elicit a plastic response, resulting in an adult phenotype distinct from adults reared from short-quiescence eggs. Extended pharate first instar quiescence affects the performance and reproductive fitness of the adult female mosquito, as well as the nutritional status of its progeny via maternal effects, in an adaptive manner; i.e., anticipatory phenotypic plasticity results as a consequence of the duration of pharate first instar quiescence, and alternative phenotypes may exist for this mosquito, with quiescence serving as a cue that possibly signals the environmental conditions that follow a dry period. My findings may explain, in part, A. aegypti's success as a vector and its geographic distribution, and have implications for its vector capacity and control.
Abstract:
Thesis (Master, Education), Queen's University, 2016.
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an earlier stage and in improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce) and develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories, classical software metrics, and other runtime environmental factors of reproducibility to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. Also, the vulnerability prediction framework helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected software security defects of Mozilla Firefox.
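A minimal sketch of the classification step, using one of the learners named above (Random Forest) on textual report features combined with simple code metrics, is shown below; the example reports, labels, and metric values are placeholders, not the Mozilla Firefox data.

```python
# Hedged sketch: classify vulnerability reports as easily reproducible (1) vs.
# hard to reproduce (0) from report text plus per-file metrics.
# Reports, labels, and metric values are placeholders, not the Firefox dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "crash reproducible with attached test case on every page load",
    "use-after-free seen intermittently under heavy GC pressure",
    "buffer overflow triggered by fuzzer input, steps to reproduce included",
    "race condition only on specific hardware, cannot reproduce locally",
]
labels = [1, 0, 1, 0]

# Textual features from the reports plus simple code metrics (e.g. LOC, churn).
tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(reports).toarray()
X_metrics = np.array([[120, 3], [450, 9], [200, 5], [800, 14]])
X = np.hstack([X_text, X_metrics])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))  # predicted reproducibility class for each report
```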