776 results for Machine learning methods
Abstract:
Multidrug resistance is a major problem in anticancer therapy, and P-glycoprotein (P-gp) is one of the proteins responsible for this resistance. This work focused mainly on the development of mathematical/statistical and "chemical" models. For the mathematical/statistical models we used Machine Learning methods such as Support Vector Machines (SVM) and Random Forests (RF); for the chemical models we used pharmacophores. These methods were applied to several targets, P-gp, p53 and the p53-MDM2 complex, using two chemical families, pifithrins for p53 and flavonoids for P-gp, and, to a lesser extent, a diverse group of molecules from several chemical families. For the SVM models applied to P-gp and the flavonoid family, we obtained good results with the Radial Basis Function (RBF) kernel: a training set accuracy of 94% and a specificity of 96%, and, for the test set, a prediction accuracy of 70% and a specificity of 67%, with the lowest number of false negatives among all kernels tested. Applying RF to the flavonoid family, the training set showed 86% accuracy and 90% specificity, while the test set gave 70% prediction accuracy and 60% specificity, again with the lowest number of false negatives. Repeating the RF procedure with a total of 63 descriptors, the results were poorer: 79% accuracy and 82% specificity for the training set, and 70% prediction accuracy and 60% specificity for the test set. Comparing the two methods, we chose SVM with the RBF kernel as the model that guarantees the best classification results. We then applied SVM to P-gp and a set of non-flavonoid molecules transported by P-gp and again obtained good results with the RBF kernel: a training set accuracy of 95% and a specificity of 93%, and, for the test set, a prediction accuracy of 70% and a specificity of 69%, with the lowest number of false negatives. The pharmacophore method was applied to three targets: a set of flavonoid inhibitors and non-flavonoid substrates of P-gp, a group of pifithrins for p53, and a diverse set of structures for p53-MDM2 binding. Each of the four pharmacophore models obtained contains three features, with the aromatic ring and hydrogen-bond donor features present in all models. Screening several databases with these models returned hits with great structural diversity.
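As a minimal sketch of the kind of SVM classification workflow described above, the snippet below trains an RBF-kernel SVM and reports test accuracy and specificity. The descriptor matrix, labels and split are synthetic placeholders, not the thesis data, and the hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical descriptor matrix (molecules x descriptors) and binary labels
# (1 = P-gp inhibitor, 0 = non-inhibitor); stand-ins for the real data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 63))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Descriptors on very different scales should be standardized before SVM training.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

def specificity(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tn / (tn + fp)

y_pred = clf.predict(scaler.transform(X_test))
print("test accuracy:", accuracy_score(y_test, y_pred))
print("test specificity:", specificity(y_test, y_pred))
```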
Abstract:
This thesis presents a low-cost, non-intrusive home energy monitor built on top of Non-Intrusive Load Monitoring (NILM) concepts and techniques. NILM solutions are already considered low-cost alternatives to the large majority of existing commercial energy monitors, but the goal here is to lower the cost even further by using a mini netbook as an all-in-one solution. The mini netbook is installed at the home's main circuit breaker and computes power consumption by reading current and voltage through the built-in sound card. At the same time, feedback is given to users through the 11-inch LCD screen as well as other built-in I/O modules. Our meter is also capable of detecting changes in power and tries to identify which appliance caused each change, and it is being used as part of an eco-feedback platform built to study the long-term effects of energy eco-feedback on individuals. This thesis presents the steps taken to build such a system, from the basics of AC power measurement to the implementation of the event detector and classifier used to disaggregate the power load. The last chapter presents results from validation tests performed on the system. It is believed that such a system will be important not only as an energy monitor, but also as an open system that can easily be changed to accommodate and test new or existing non-intrusive load monitoring techniques.
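To illustrate the AC power measurement step, the sketch below computes real power, apparent power and power factor from one window of synchronously sampled voltage and current, roughly as a sound-card-based meter might. The sampling rate, calibration constants and synthetic signals are assumptions for illustration, not the values used in the thesis.

```python
import numpy as np

FS = 44100       # sound-card sampling rate (Hz), assumed
V_CAL = 150.0    # volts per unit sample amplitude, hypothetical calibration
I_CAL = 30.0     # amperes per unit sample amplitude, hypothetical calibration

def power_metrics(v_samples, i_samples):
    """Basic AC power quantities for one analysis window of v and i samples."""
    v = V_CAL * np.asarray(v_samples, dtype=float)
    i = I_CAL * np.asarray(i_samples, dtype=float)
    p_real = np.mean(v * i)          # real (active) power, W
    v_rms = np.sqrt(np.mean(v ** 2))
    i_rms = np.sqrt(np.mean(i ** 2))
    s_apparent = v_rms * i_rms       # apparent power, VA
    pf = p_real / s_apparent if s_apparent else 0.0
    return p_real, s_apparent, pf

# One 50 Hz cycle of synthetic data: current lags voltage by 30 degrees.
t = np.arange(0, 0.02, 1.0 / FS)
v = np.sin(2 * np.pi * 50 * t)
i = 0.5 * np.sin(2 * np.pi * 50 * t - np.pi / 6)
print(power_metrics(v, i))
```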
Abstract:
The present work aimed to determine which leaf dimensional variables are most suitable for estimating the leaf area of anthurium (Anthurium andraeanum) cv. Apalai by means of a linear regression equation, and to compare the performance of different regression functions obtained with machine learning (ML). The variable that best estimated leaf area was the product of the linear dimensions, length (C) and width (L), with the proposed equation being Af = 0.9672 × C × L and a coefficient of determination (R²) of 0.99. It was also verified, using ML, that linear functions are the most suitable for estimating the leaf area of this species.
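A minimal sketch of fitting such a single-variable model (leaf area as a function of the length times width product) by ordinary least squares through the origin; the measurements below are synthetic placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical leaf measurements: length C (cm), width L (cm), measured area Af (cm^2).
C = np.array([10.2, 12.5, 15.1, 18.0, 20.3])
L = np.array([7.8, 9.1, 11.0, 13.2, 14.9])
Af = np.array([77.0, 110.1, 160.5, 229.9, 292.4])

x = C * L                                    # single predictor: product of dimensions
b = np.sum(x * Af) / np.sum(x * x)           # least-squares slope for Af = b * (C*L)
pred = b * x
r2 = 1 - np.sum((Af - pred) ** 2) / np.sum((Af - Af.mean()) ** 2)
print(f"Af = {b:.4f} * C*L, R^2 = {r2:.3f}")
```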
Abstract:
Industrial automation is directly linked to the development of information technology. Better hardware solutions, as well as improvements in software development methodologies, have made possible the rapid growth of production process control. In this thesis we propose an architecture that joins two technologies, one from the hardware field (industrial networks) and one from the software field (multiagent systems). The objective of this proposal is to combine these technologies in a multiagent architecture that allows control strategies to be implemented in field devices. With this, we intend to develop an agent architecture to detect and solve problems that may occur in the industrial network environment. Our work allies machine learning with the industrial context, making the proposed multiagent architecture adaptable to unfamiliar or unexpected production environments. We use neural networks and present a strategy for allocating these networks to industrial network field devices. With this we intend to improve decision support at the plant level and to allow operation independent of human intervention.
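Purely as an illustrative sketch, not the thesis architecture, the snippet below shows one way a field-device agent could wrap a small neural network to flag abnormal sensor readings and emit a message for a coordinator agent. The agent interface, features and training data are hypothetical.

```python
from dataclasses import dataclass
import numpy as np
from sklearn.neural_network import MLPClassifier

@dataclass
class DiagnosisAgent:
    """Agent embedded near a field device: classifies sensor readings as normal/faulty."""
    model: MLPClassifier

    def perceive_and_act(self, reading: np.ndarray) -> str:
        label = self.model.predict(reading.reshape(1, -1))[0]
        # In a full multiagent system this message would be sent to a coordinator agent.
        return "alarm" if label == 1 else "ok"

# Hypothetical training data: rows of sensor features, label 1 = fault condition.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1).fit(X, y)
agent = DiagnosisAgent(model=net)
print(agent.perceive_and_act(rng.normal(size=4)))
```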
Abstract:
Support Vector Machines (SVM) have attracted increasing attention in the machine learning area, particularly for classification and pattern recognition. However, in some cases it is not easy to determine accurately the class to which a given pattern belongs. This thesis involves the construction of an interval pattern classifier using SVM in association with interval theory, in order to model the separation of a pattern set into distinct classes with precision, aiming at an optimized separation capable of handling the imprecision contained in the initial data and generated during computational processing. The SVM is a linear machine; to allow it to solve real-world problems, which are usually nonlinear, the pattern set (the input set) must be mapped so that the nonlinear problem becomes a linear one, and kernel machines are responsible for this mapping. To create the interval extension of SVM, for both linear and nonlinear problems, it was necessary to define interval kernels and to extend Mercer's theorem (which characterizes a kernel function) to interval functions.
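The thesis defines interval kernels formally; as an illustrative sketch only (not the thesis construction), one common way to obtain an interval-valued RBF kernel is to propagate the interval bounds of the data through the squared distance and the exponential, as below. The bound formulas and gamma value are assumptions for illustration.

```python
import numpy as np

def interval_sq_dist(a_lo, a_hi, b_lo, b_hi):
    """Lower/upper bounds of the squared Euclidean distance between interval vectors."""
    gap = np.maximum.reduce([b_lo - a_hi, a_lo - b_hi, np.zeros_like(a_lo)])
    far = np.maximum(np.abs(a_lo - b_hi), np.abs(a_hi - b_lo))
    return np.sum(gap ** 2), np.sum(far ** 2)

def interval_rbf(a_lo, a_hi, b_lo, b_hi, gamma=0.5):
    """Interval-valued RBF kernel: exp(-x) is decreasing, so the distance bounds swap."""
    d_lo, d_hi = interval_sq_dist(a_lo, a_hi, b_lo, b_hi)
    return np.exp(-gamma * d_hi), np.exp(-gamma * d_lo)

# Two interval patterns in R^2 (each feature known only up to an interval).
k_lo, k_hi = interval_rbf(np.array([0.9, 1.8]), np.array([1.1, 2.2]),
                          np.array([1.5, 1.0]), np.array([1.7, 1.4]))
print(f"K in [{k_lo:.3f}, {k_hi:.3f}]")
```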
Abstract:
Since equipment maintenance is the major cost factor in industrial plants, the development of fault prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and high robustness; nevertheless, they are not immune to fault types such as shorted windings and broken bars. Several acquisition, processing and signal analysis approaches are applied to improve their diagnosis, and the most efficient techniques use current sensors and the analysis of the current signature. In this dissertation, starting from these sensors, the signals are analyzed through Park's vector, which provides good visualization capability. Because acquiring fault data is an arduous task, a methodology for building a database is developed, in which Park's transform to the stationary reference frame is applied to a machine model based on the solution of the machine's differential equations. Fault detection requires a detailed analysis of the variables and their influences, which makes the diagnosis more complex. Pattern recognition techniques allow such systems to be generated automatically, based on patterns and concepts in the data that in most cases are undetectable by specialists, thus supporting decision-making tasks. Classification algorithms with diverse learning paradigms (k-Nearest Neighbors, Neural Networks, Decision Trees and Naïve Bayes) are used for pattern recognition of machine faults. Multi-classifier systems are used to reduce classification errors; we inspected the homogeneous algorithms Bagging and Boosting and the heterogeneous algorithms Vote, Stacking and StackingC. The results show the effectiveness of the constructed model for fault modeling, as well as the possibility of using multi-classifier algorithms for fault classification.
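As an illustration of the Park's vector step only (the fault-classifier stage is omitted), a commonly used formulation of the transform to the stationary d-q frame is sketched below; the balanced synthetic currents are placeholders for measured signals, and for a healthy machine the d-q locus is a circle, which faults distort.

```python
import numpy as np

def parks_vector(ia, ib, ic):
    """Park's vector (stationary d-q frame) from the three stator phase currents."""
    id_ = np.sqrt(2 / 3) * ia - (1 / np.sqrt(6)) * ib - (1 / np.sqrt(6)) * ic
    iq_ = (1 / np.sqrt(2)) * ib - (1 / np.sqrt(2)) * ic
    return id_, iq_

# One electrical cycle of a balanced 60 Hz machine: the locus radius stays constant;
# winding faults turn the circle into an ellipse, which the diagnosis features capture.
t = np.linspace(0, 1 / 60, 1000)
ia = np.cos(2 * np.pi * 60 * t)
ib = np.cos(2 * np.pi * 60 * t - 2 * np.pi / 3)
ic = np.cos(2 * np.pi * 60 * t + 2 * np.pi / 3)
id_, iq_ = parks_vector(ia, ib, ic)
print("locus radius (min, max):", np.hypot(id_, iq_).min(), np.hypot(id_, iq_).max())
```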
Abstract:
This work is situated in the field of mathematics education for youth and adult education and is aimed at practitioners of educational action in mathematics who work with this type of teaching, adopting the Mathematical Modelling approach as its parameter. The purpose of the research is to draw up a proposal for applying mathematical modelling as an alternative for teaching and learning geometry in youth and adult education. The research was developed in three classes of the third level (5th and 6th grades) of youth and adult education in a municipal school on the outskirts of Natal. It has a qualitative nature with a participant observation approach, since the researcher acted directly in the research environment as the mathematics teacher of those same classes. We used questionnaires, lesson notes and the analysis of official documents as data collection instruments. The results indicate that the activities using mathematical modelling valued the students' know-how in the knowledge construction process, seeking to develop meaningful learning and helping students build connections between mathematics and other areas of knowledge, and within mathematics itself, which broadens their understanding and supports their participation in other social settings, besides fostering a change in the posture of students and teachers with respect to the dynamics of the mathematics classroom.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Traditional applications of feature selection in areas such as data mining, machine learning and pattern recognition aim to improve the accuracy and reduce the computational cost of the model. This is done by removing redundant, irrelevant or noisy data and finding a representative subset of the data that reduces its dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the verification that this type of model performs better than individual models when the base classifiers are diverse, a new field of application opens up for feature selection research. In this new field, the goal is to find diverse subsets of features for the construction of the base classifiers of ensemble systems. This work proposes an approach that maximizes the diversity of the ensembles by selecting subsets of features using a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
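The sketch below is not the thesis algorithm, only a generic illustration of the idea: a small genetic-algorithm-style search over binary feature masks, scored by a filter criterion (mutual information) plus a penalty on overlap with subsets already assigned to other base classifiers, so that the resulting feature views are diverse. All settings, the toy data and the diversity penalty weight are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Toy data standing in for the real problem; the GA settings below are illustrative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)   # filter criterion, model-independent
rng = np.random.default_rng(0)

def fitness(mask, chosen):
    if mask.sum() == 0:
        return -1.0
    rel = relevance[mask].mean()
    # Penalize overlap with subsets already assigned to other base classifiers.
    overlap = max((np.mean(mask & m) for m in chosen), default=0.0)
    return rel - 0.5 * overlap

def select_subset(chosen, pop_size=30, gens=40, n_feat=20):
    pop = rng.random((pop_size, n_feat)) < 0.3
    for _ in range(gens):
        scores = np.array([fitness(ind, chosen) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        kids = parents[rng.integers(len(parents), size=pop_size - len(parents))].copy()
        kids ^= rng.random(kids.shape) < 0.05            # bit-flip mutation
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(ind, chosen) for ind in pop])
    return pop[int(np.argmax(scores))]

subsets = []
for _ in range(3):                                       # three base classifiers
    subsets.append(select_subset(subsets))
print([np.flatnonzero(m).tolist() for m in subsets])
```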
Abstract:
The identification of genes essential for survival is important for the understanding of the minimal requirements for cellular life and for drug design. As experimental studies with the purpose of building a catalog of essential genes for a given organism are time-consuming and laborious, a computational approach which could predict gene essentiality with high accuracy would be of great value. We present here a novel computational approach, called NTPGE (Network Topology-based Prediction of Gene Essentiality), that relies on the network topology features of a gene to estimate its essentiality. The first step of NTPGE is to construct the integrated molecular network for a given organism comprising protein physical, metabolic and transcriptional regulation interactions. The second step consists in training a decision-tree-based machine-learning algorithm on known essential and non-essential genes of the organism of interest, considering as learning attributes the network topology information for each of these genes. Finally, the decision-tree classifier generated is applied to the set of genes of this organism to estimate essentiality for each gene. We applied the NTPGE approach for discovering the essential genes in Escherichia coli and then assessed its performance.
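A minimal sketch of the general pipeline (not the NTPGE feature set or data): compute a few network-topology attributes per gene, train a decision tree on genes with known essentiality, then predict the rest. The toy graph, labels and features are assumptions for illustration.

```python
import networkx as nx
from sklearn.tree import DecisionTreeClassifier

def topology_features(graph, gene):
    """A few simple network-topology attributes for one gene (node)."""
    return [
        graph.degree(gene),
        nx.clustering(graph, gene),
        nx.closeness_centrality(graph, gene),
    ]

# Tiny toy network standing in for the integrated molecular network of the organism.
g = nx.karate_club_graph()
known = {0: True, 33: True, 9: False, 14: False, 20: False}   # gene -> essential?

X = [topology_features(g, n) for n in known]
y = [int(v) for v in known.values()]
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Estimate essentiality for every node (gene) in the network.
pred = {n: bool(clf.predict([topology_features(g, n)])[0]) for n in g.nodes}
print(sum(pred.values()), "of", g.number_of_nodes(), "genes predicted essential")
```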
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Spiking neural networks, networks that use a temporal coding of information, have emerged as a promising approach within the connectionist paradigm arising from cognitive science. One of these new models is the spiking radial basis function neural network, which is able to store information in the axonal delay times of the neurons. A learning algorithm has been successfully applied to this spiking network, which proved capable of mapping an input spike sequence into an output spike sequence. More recently, a method based on Gaussian receptive fields was proposed to encode constant data into a sequence of spike times, making it possible for this network to handle computational data. The learning process of this new network is not yet fully understood, and deeper investigation is needed to situate this model within the context of machine learning and to establish the abilities and limitations of the network. This work presents an investigation of this new classifier and a study of its ability to cluster data in three dimensions, particularly seeking to establish its application domains and horizons in the field of computer vision.
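As an illustrative sketch of the Gaussian receptive field encoding idea (the parameters and width heuristic are assumptions, not the values used in the work), the function below converts a scalar into a vector of spike times: strong field responses fire early, weak ones fire late, and responses below a threshold produce no spike.

```python
import numpy as np

def gaussian_receptive_fields(x, x_min=0.0, x_max=1.0, n_fields=8,
                              t_max=10.0, threshold=0.1):
    """Encode a scalar x as spike times using overlapping Gaussian receptive fields."""
    centers = np.linspace(x_min, x_max, n_fields)
    width = (x_max - x_min) / (1.5 * (n_fields - 2))     # a common heuristic width
    response = np.exp(-0.5 * ((x - centers) / width) ** 2)
    times = t_max * (1.0 - response)                     # strong response -> early spike
    times[response < threshold] = np.inf                 # too weak -> no spike
    return times

print(np.round(gaussian_receptive_fields(0.37), 2))
```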
Abstract:
New social demands and the Brazilian curricular guidelines for dentistry programs pose challenges to teaching practice in higher education institutions. In this context, we investigate the conceptions of university teaching quality held by faculty members who act as undergraduate coordinators at dentistry schools in the State of São Paulo that offer stricto sensu graduate programs, in order to reflect on the challenges of faculty development in the area. Data were collected with a questionnaire containing open and closed questions and with a semi-structured interview organized to allow the discussion to be deepened. The data were described and discussed through quantitative and qualitative analysis, based on the three dimensions of teaching practice analyzed by Cunha (1995): political-structural, curricular and pedagogical. This article focuses only on aspects of the pedagogical dimension, in which the points expressing the most contradictory stances refer to teaching-learning methods, student participation and tutoring. The results point to conceptions of teaching and learning that oscillate between traditional and innovative models, signaling points of conflict regarding paradigms that are directly linked to curricular and political-structural issues.
Abstract:
Interactive visual representations complement traditional statistical and machine learning techniques for data analysis, allowing users to play a more active role in the knowledge discovery process and making the whole process more understandable. Though visual representations are applicable to several stages of the knowledge discovery process, a common use of visualization is in the initial stages, to explore and organize a sometimes unknown and complex data set. In this context, the integrated and coordinated use of multiple graphical representations - that is, where user actions can affect multiple visualizations when desired - allows data to be observed from several perspectives and offers richer information than isolated representations. In this paper we propose an underlying model for an extensible and adaptable environment that allows independently developed visualization components to be gradually integrated into a user-configured knowledge discovery application. Because a major requirement when using multiple visual techniques is the ability to link them, so that user actions executed on one representation propagate to others if desired, the model also allows runtime configuration of coordinated user actions over different visual representations. We illustrate how this environment is being used to assist data exploration and organization in a climate classification problem.
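The snippet below is not the paper's model, only a generic sketch of the coordination idea it describes: a mediator through which independently developed views register handlers for named actions, so that a user action (here a hypothetical "select") emitted by one view is propagated to the others at runtime.

```python
from typing import Callable, Dict, List

class Coordinator:
    """Minimal mediator: views register handlers for named actions, and an action
    emitted by one view (e.g. a selection) is propagated to all other views."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[object, object], None]]] = {}

    def register(self, action: str, handler: Callable[[object, object], None]) -> None:
        self._handlers.setdefault(action, []).append(handler)

    def emit(self, action: str, source: object, payload: object) -> None:
        for handler in self._handlers.get(action, []):
            if handler.__self__ is not source:      # don't echo back to the emitter
                handler(source, payload)

class View:
    def __init__(self, name: str) -> None:
        self.name = name
    def on_select(self, source, items) -> None:
        print(f"{self.name}: highlighting {items} selected in {source.name}")

coord = Coordinator()
scatter, treemap = View("scatterplot"), View("treemap")
coord.register("select", scatter.on_select)
coord.register("select", treemap.on_select)
coord.emit("select", source=scatter, payload=[3, 7, 12])
```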