923 results for Bayes' Theorem


Relevance:

10.00%

Publisher:

Abstract:

This material first presents a very important result, the mean value theorem, together with worked examples. Next comes the definition of the antiderivative (or primitive) of a function. The second topic gives the definition of the indefinite integral and presents some basic and important integrals; a table of basic integrals is also provided. Finally, properties and examples of integrals are listed.
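
For reference, the standard statements of the two results the material opens with, written here in our own notation rather than quoted from the material: the mean value theorem and the definition of an antiderivative.

\[ \text{If } f \text{ is continuous on } [a,b] \text{ and differentiable on } (a,b), \text{ then there is } c \in (a,b) \text{ with } f'(c) = \frac{f(b)-f(a)}{b-a}. \]

\[ F \text{ is an antiderivative (primitive) of } f \text{ on an interval when } F' = f \text{ there, in which case } \int f(x)\,dx = F(x) + C. \]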

Relevance:

10.00%

Publisher:

Abstract:

This chapter first presents the formal definition of the integral, showing that it computes the area under the graph of a function on an interval [a, b]. Next, the properties of integrals are listed without proof, along with some conventions that will be used. The unit also covers the fundamental theorem of calculus and an example of computing an area with the integral. The techniques of integration by substitution and integration by parts are presented as well, together with several examples solved step by step. Finally, it covers some integrals involving trigonometric functions, reduction (recurrence) formulas, and trigonometric substitutions.
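
For reference, the standard forms of the results and techniques the chapter covers, in our notation rather than quoted from the chapter: the fundamental theorem of calculus, substitution, and integration by parts.

\[ \int_a^b f(x)\,dx = F(b) - F(a), \quad \text{where } F' = f \text{ on } [a,b]. \]

\[ \int f(g(x))\,g'(x)\,dx = \int f(u)\,du \;\;(u = g(x)), \qquad \int u\,dv = uv - \int v\,du. \]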

Relevance:

10.00%

Publisher:

Abstract:

Chapter 8 of the book "Noções de Cálculo Diferencial e Integral para Tecnólogos".

Relevance:

10.00%

Publisher:

Abstract:

Version with a screen-reader-accessible menu and a video with audio description.

Relevance:

10.00%

Publisher:

Abstract:

Version with a screen-reader-accessible menu and a video with audio description.

Relevance:

10.00%

Publisher:

Abstract:

This video lesson presents the division theorem in the context of the integers and the greatest common divisor (GCD). It also highlights Euclid's algorithm, which is used to compute the greatest common divisor.
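
A minimal Python sketch of Euclid's algorithm as described in the lesson; the function name and the example values are illustrative, not taken from the video.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

# Example: gcd(252, 198) == 18, via the remainder chain 252, 198, 54, 36, 18, 0.
print(gcd(252, 198))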

Relevance:

10.00%

Publisher:

Abstract:

Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. Assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (Naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
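
The classification step described above can be illustrated, in spirit only, by a short scikit-learn sketch: a Gaussian Naïve Bayes classifier scored with AUROC under cross-validation. The feature matrix, labels and parameters below are placeholders standing in for the assembly-graph attributes and experimental-period labels, not the authors' data or code.

# Sketch only: Naive Bayes classification of experimental periods from graph attributes,
# scored with AUROC; X has one row per time window and one column per graph attribute.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # placeholder graph attributes
y = rng.integers(0, 2, size=200)       # placeholder period labels (e.g. pre vs post)

auroc = cross_val_score(GaussianNB(), X, y, cv=5, scoring="roc_auc")
print("mean AUROC:", auroc.mean())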

Relevance:

10.00%

Publisher:

Abstract:

Trigonometry, a branch of mathematics concerned with the study of triangles, developed from practical needs, especially those of astronomy, surveying and navigation. Johann Müller, known as Regiomontanus (1436-1476), a fifteenth-century mathematician and astronomer, played an important role in the development of this science. His work De Triangulis Omnimodis Libri Quinque, written around 1464 and published posthumously in 1533, presents the first systematic European exposition of plane and spherical trigonometry in a treatment independent of astronomy. In this study we present a description, translation and analysis of some aspects of this important work in the history of trigonometry. The translation was based on Barnabas Hughes's 1967 edition, Regiomontanus on Triangles, which contains the original Latin text alongside an English translation. For most of our Portuguese translation we relied on the English version, but doubtful passages, statements and figures were checked against the original Latin. In this work we see that trigonometry is treated as a branch of mathematics subordinated to geometry, that is, directed toward the study of triangles. Regiomontanus provides a large number of theorems, such as the original trigonometric formula for the area of a triangle. He uses algebra to solve geometric problems and, notably, states the first practical theorem for the law of cosines in spherical trigonometry. This study thus shows part of the development of trigonometry in the fifteenth century, especially with regard to concepts such as the sine and the cosine (versed sine). The work discussed above is of paramount importance for research in the history of mathematics, more specifically in the historical analysis and critique of literary sources and in the study of the work of a particular mathematician.
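
In modern notation, the two results singled out above are usually written as follows; these are the standard present-day forms, not Regiomontanus's own wording.

\[ A = \tfrac{1}{2}\,ab\,\sin C \quad \text{(area of a plane triangle with sides } a, b \text{ and included angle } C\text{)}, \]

\[ \cos c = \cos a \cos b + \sin a \sin b \cos C \quad \text{(spherical law of cosines, with sides } a, b, c \text{ taken as arcs and } C \text{ the angle opposite } c\text{)}. \]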

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

This work develops a robustness analysis with respect to modeling errors, applied to indirect control strategies that use Artificial Neural Networks (ANNs) of the multilayer feedforward perceptron class with on-line training based on the gradient method (backpropagation). The schemes presented are called Indirect Hybrid Control and Indirect Neural Control. Two Robustness Theorems are presented, one for each proposed indirect control scheme, which allow the computation of the maximum steady-state control error caused by the modeling error introduced by the neural identifier, either for the closed-loop configuration with a conventional controller (Indirect Hybrid Control) or for the closed-loop configuration with a neural controller (Indirect Neural Control). Since the robustness analysis is restricted to steady-state plant behavior, this work also includes a stability analysis suitable for the multilayer perceptron class of ANNs trained with the backpropagation algorithm, to ensure the convergence and stability of the neural systems used. On the other hand, the boundedness of the initial transient behavior is assured by the assumption that the plant is BIBO (Bounded Input, Bounded Output) stable. The Robustness Theorems were tested on the proposed indirect control strategies, applied to the regulation control of simulated examples with nonlinear plants, and the results are presented.
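
A minimal sketch, not the author's scheme, of the kind of neural identifier mentioned above: a one-hidden-layer feedforward perceptron whose weights are updated on-line by gradient descent (backpropagation) from the plant's input/output pairs. The toy plant, excitation signal, step size and network size are illustrative assumptions.

# Sketch: on-line backpropagation training of a one-hidden-layer MLP plant identifier.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(8, 2))   # hidden weights; identifier input is [u(k), y(k)]
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=8)        # output weights
b2 = 0.0
lr = 0.01

def plant(y, u):                          # toy nonlinear plant standing in for the real one
    return 0.6 * y + 0.2 * np.tanh(u)

y = 0.0
for k in range(5000):
    u = np.sin(0.05 * k)                  # excitation signal
    x = np.array([u, y])
    h = np.tanh(W1 @ x + b1)              # hidden layer
    y_hat = W2 @ h + b2                   # identifier prediction of y(k+1)
    y_next = plant(y, u)
    e = y_hat - y_next                    # modeling error
    # One backpropagation step per sample (on-line gradient descent on the squared error).
    W2 -= lr * e * h
    b2 -= lr * e
    grad_h = e * W2 * (1 - h**2)
    W1 -= lr * np.outer(grad_h, x)
    b1 -= lr * grad_h
    y = y_next

print("final modeling error:", float(e))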

Relevance:

10.00%

Publisher:

Abstract:

The Support Vector Machine (SVM) has attracted increasing attention in the machine learning area, particularly in classification and pattern recognition. However, in some cases it is not easy to determine accurately the class to which a given pattern belongs. This thesis involves the construction of an interval pattern classifier using SVM in association with interval theory, in order to model the separation of a pattern set into distinct classes with precision, aiming at an optimized separation capable of handling the imprecision contained in the initial data and generated during computational processing. The SVM is a linear machine; in order for it to solve real-world (usually nonlinear) problems, the pattern set, known as the input set, must be mapped from its nonlinear nature into a linear problem. Kernel machines are responsible for this mapping. To create the interval extension of the SVM, for both linear and nonlinear problems, it was necessary to define an interval kernel and to extend Mercer's theorem (which characterizes a kernel function) to interval functions.
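
For context, a minimal sketch of the standard (non-interval) kernel SVM that the interval extension builds on; the data, kernel choice and parameters below are illustrative assumptions, not the thesis's construction.

# Sketch: a standard kernel SVM classifier (the non-interval baseline).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # nonlinear (circular) class boundary

clf = SVC(kernel="rbf", C=1.0, gamma="scale")       # the RBF kernel does the nonlinear mapping
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))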

Relevance:

10.00%

Publisher:

Abstract:

Since equipment maintenance is the major cost factor in industrial plants, the development of fault prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and high robustness; nevertheless, they are not protected from fault types such as shorted windings and broken bars. Several acquisition, processing and signal analysis approaches are applied to improve their diagnosis, and the more efficient techniques use current sensors and current signature analysis. In this dissertation, starting from these sensors, signal analysis is performed through Park's vector, which provides good visualization capability. Because fault data acquisition is an arduous task, a methodology for database construction is developed. Park's transform is applied in the stationary reference frame to model the machine by solving its differential equations. Fault detection requires a detailed analysis of the variables and their influences, which makes the diagnosis more complex. Pattern recognition tasks allow systems to be generated automatically, based on patterns and data concepts that in most cases are undetectable by specialists, supporting decision making. Classification algorithms with diverse learning paradigms (k-Nearest Neighbors, Neural Networks, Decision Trees and Naïve Bayes) are used for the recognition of machine fault patterns. Multi-classifier systems are used to reduce classification errors; homogeneous (Bagging and Boosting) and heterogeneous (Vote, Stacking and StackingC) algorithms are inspected. The results show the effectiveness of the constructed model for fault modeling, as well as the possibility of using multi-classifier algorithms for fault classification.
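
The Park's vector approach mentioned above can be sketched as follows: the three phase currents are projected onto two orthogonal components, and the trajectory of the resulting vector (ideally a circle for a healthy, balanced machine) is inspected for distortion. The formulas are the usual textbook ones; the current signals below are placeholders, not measured data.

# Sketch: Park's vector components from three-phase stator currents (textbook form).
import numpy as np

t = np.linspace(0, 0.1, 1000)
w = 2 * np.pi * 60                       # placeholder supply frequency (60 Hz)
ia = np.cos(w * t)                       # placeholder balanced phase currents
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)

iD = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
iQ = ib / np.sqrt(2) - ic / np.sqrt(2)

# For a healthy, balanced machine the (iD, iQ) trajectory is a circle;
# winding faults distort it into an ellipse, which is what the signature analysis inspects.
print("radius spread:", np.ptp(np.hypot(iD, iQ)))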

Relevance:

10.00%

Publisher:

Abstract:

One of the most important goals of bioinformatics is the ability to identify genes in uncharacterized DNA sequences in worldwide databases. Gene expression in prokaryotes is initiated when the RNA-polymerase enzyme interacts with DNA regions called promoters, where the main regulatory elements of the transcription process are located. Despite the improvement of in vitro techniques for molecular biology analysis, characterizing and identifying a great number of promoters in a genome is a complex task, and the main drawback is the absence of a large set of promoters with which to identify conserved patterns among species. Hence, an in silico method to predict them in any species is a challenge. Improved promoter prediction methods can be one step towards developing more reliable ab initio gene prediction methods. In this work, we present an empirical comparison of Machine Learning (ML) techniques such as Naïve Bayes, Decision Trees, Support Vector Machines, Neural Networks, Voted Perceptron, PART and k-NN, as well as ensemble approaches (Bagging and Boosting), on the task of predicting promoters in Bacillus subtilis. To do so, we first built two data sets of promoter and non-promoter sequences: one for B. subtilis and a hybrid one. A cross-validation procedure is applied to evaluate the ML methods. Good results were obtained with ML methods such as SVM and Naïve Bayes on the B. subtilis data set; however, we did not reach good results on the hybrid database.
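
The evaluation protocol described above (several classifiers compared by cross-validation) can be sketched roughly as follows; the encoded sequence features, labels and classifier settings are placeholders, not the study's data or configuration.

# Sketch: comparing classifiers by cross-validation, as in the evaluation described above.
# X would hold encoded promoter/non-promoter sequences; here it is random placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.integers(0, 4, size=(300, 80)).astype(float)   # placeholder sequence encoding
y = rng.integers(0, 2, size=300)                       # promoter (1) vs non-promoter (0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("Decision Tree", DecisionTreeClassifier())]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f}")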

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbors, Naive Bayes, Support Vector Machines and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, techniques for artificial class balancing (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, where the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by a hypothesis test in order to evaluate whether there is a statistically significant difference between them. With respect to the results obtained with the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, their use did improve the classification error for the minority class. In this context, the NCL technique proved to be the most appropriate.
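
A minimal sketch of the pairwise comparison step described above: the per-fold error rates of two classifiers are compared with a paired hypothesis test. The feature matrix, labels and classifier choices are random placeholders, not the protein data set or the thesis's exact statistical procedure.

# Sketch: paired hypothesis test on cross-validated error rates of two classifiers.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 4, size=400)        # placeholder structural-class labels

err_svm = 1 - cross_val_score(SVC(), X, y, cv=10)
err_tree = 1 - cross_val_score(DecisionTreeClassifier(), X, y, cv=10)

stat, p = ttest_rel(err_svm, err_tree)  # paired test over the same folds
print(f"mean error SVM {err_svm.mean():.3f}, tree {err_tree.mean():.3f}, p-value {p:.3f}")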

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)