37 results for Bayes' Theorem
Abstract:
In Einstein's theory of General Relativity, the field equations relate the geometry of space-time to its content of matter and energy, the sources of the gravitational field. This content is described by a second-order tensor, known as the energy-momentum tensor. On the other hand, the energy-momentum tensors that have physical meaning are not specified by the theory. In the 1970s, Hawking and Ellis set out a few conditions, considered feasible from a physical point of view, in order to limit the arbitrariness of these tensors. These conditions, which became known as the Hawking-Ellis energy conditions, play important roles in gravitation. They are widely used as powerful analysis tools: from the demonstration of important theorems concerning the behavior of gravitational fields and their associated geometries, and the quantum behavior of gravity, to the analysis of cosmological models. In this dissertation we present a rigorous deduction of the several energy conditions currently in vogue in the scientific literature: the Null Energy Condition (NEC), the Weak Energy Condition (WEC), the Strong Energy Condition (SEC), the Dominant Energy Condition (DEC) and the Null Dominant Energy Condition (NDEC). Bearing in mind the most basic applications in Cosmology and Gravitation, the deductions were first made for the energy-momentum tensor of a generalized perfect fluid and then extended to scalar fields with minimal and non-minimal coupling to the gravitational field. We also present a study of the possible violations of some of these energy conditions. Aiming at the study of the singular nature of some exact solutions of Einstein's General Relativity, in 1955 the Indian physicist Raychaudhuri derived an equation that is today considered fundamental to the study of the gravitational attraction of matter, which became known as the Raychaudhuri equation.
This famous equation is fundamental to the understanding of gravitational attraction in Astrophysics and Cosmology and to the comprehension of the singularity theorems, such as the Hawking-Penrose theorem on the singularities of gravitational collapse. In this dissertation we derive the Raychaudhuri equation, the Frobenius theorem and the Focusing theorem for time-like and null congruences of a pseudo-Riemannian manifold. We discuss the geometric and physical meaning of this equation, its connections with the energy conditions, and some of its many applications.
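For reference, the objects this abstract discusses have standard textbook forms (the notation below is the usual one from the literature, not necessarily the dissertation's):

```latex
% Raychaudhuri equation for a timelike geodesic congruence with tangent u^a,
% expansion \theta, shear \sigma_{ab} and vorticity \omega_{ab}:
\[
\frac{d\theta}{d\tau}
  = -\tfrac{1}{3}\theta^{2}
    - \sigma_{ab}\sigma^{ab}
    + \omega_{ab}\omega^{ab}
    - R_{ab}u^{a}u^{b}
\]
% Energy conditions for a perfect fluid
% T_{ab} = (\rho + p)\,u_a u_b + p\,g_{ab}:
%   NEC: \rho + p \ge 0
%   WEC: \rho \ge 0 \ \text{and}\ \rho + p \ge 0
%   SEC: \rho + p \ge 0 \ \text{and}\ \rho + 3p \ge 0
%   DEC: \rho \ge |p|
```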
Abstract:
The standard kinetic theory for a nonrelativistic dilute gas is generalized in the spirit of the nonextensive statistical distribution introduced by Tsallis. The new formalism depends on an arbitrary parameter q measuring the degree of nonextensivity. In the limit q = 1, the extensive Maxwell-Boltzmann theory is recovered. Starting from a purely kinetic deduction of the velocity q-distribution function, the Boltzmann H-theorem is generalized to include the possibility of nonextensive out-of-equilibrium effects. Based on this investigation, it is proved that Tsallis' distribution is the necessary and sufficient condition defining a thermodynamic equilibrium state in the nonextensive context. This result follows naturally from the generalized transport equation and also from the extended H-theorem. Two physical applications of the nonextensive effects have been considered. Closed analytic expressions were obtained for the Doppler broadening of spectral lines from an excited gas, as well as for the dispersion relations describing the electrostatic oscillations in a dilute electronic plasma. In the latter case, a comparison with experimental results strongly suggests a Tsallis distribution with the q parameter smaller than unity. A complementary study addresses the thermodynamic behavior of a relativistic imperfect simple fluid. Using nonequilibrium thermodynamics, we show how the basic primary variables, namely the energy-momentum tensor and the particle and entropy fluxes, depend on the several dissipative processes present in the fluid. The temperature variation law for this moving imperfect fluid is also obtained, and the Eckart and Landau-Lifshitz formulations are recovered as particular cases.
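The q-distribution described above can be sketched in a few lines. The snippet below is an illustrative, unnormalized one-dimensional q-Maxwellian (function and parameter names are ours, not the dissertation's); it reduces to the ordinary Maxwell-Boltzmann factor as q approaches 1.

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]^(1/(1-q)), -> exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # Tsallis cutoff: the distribution vanishes outside its support
    return base ** (1.0 / (1.0 - q))

def q_velocity_distribution(v, q, m=1.0, kT=1.0):
    """Unnormalized q-Maxwellian for one velocity component."""
    return q_exp(-m * v * v / (2.0 * kT), q)
```

For q slightly different from 1 the distribution is numerically indistinguishable from the Gaussian one, which is the limit stated in the abstract.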
Abstract:
Phylogeny is one of the main activities of modern taxonomists and a way to reconstruct the history of life through comparative analysis of the sequences stored in genomes, aiming to find a justification for their origin and evolution. Among the sequences with a high level of conservation are the repair genes, which are important for the conservation and maintenance of genetic stability. Hence, variations in repair genes, such as the genes of nucleotide excision repair (NER), may indicate a possible gene transfer between species. This study aimed to examine the evolutionary history of the components of NER. For this, sequences of UVRA, UVRB, UVRC and XPB were obtained from GenBank by BLAST-p, considering an E-value of 10⁻¹⁵ as the cutoff to create a database. Phylogenetic studies were done using algorithms from the PAUP and BAYES programs and the PHYLIP package. Phylogenetic trees were built with protein sequences and with 16S ribosomal RNA sequences for comparative analysis, by the parsimony, likelihood and Bayesian methods. The XPB tree shows that archaeal XPB helicases are similar to eukaryotic helicases. According to these data, we infer that the eukaryotic nucleotide excision repair system appeared in Archaea. In the UVRA, UVRB and UVRC trees, a monophyletic group was found, formed by three species of the class Epsilonproteobacteria, three species of the class Mollicutes, and archaebacteria of the classes Methanobacteria and Methanococci. This finding is supported by a tree obtained with the concatenated UVRA, UVRB and UVRC proteins. Thus, although there are arguments in the literature defending the horizontal transfer of the uvrABC system from bacteria to archaebacteria, the analysis made in this study suggests that a vertical transfer occurred, from archaebacteria, of both sets of NER genes: uvrABC and the XPs.
According to parsimony, this is the most plausible scenario, given the occurrence of monophyletic groups, the divergence times of the classes, and the number of archaebacterial species with the uvrABC system.
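The parsimony criterion invoked above can be illustrated with Fitch's small-parsimony algorithm, which counts the minimum number of character changes a fixed tree requires for one alignment column. The tree topology and states below are a toy example, not data from the study.

```python
def fitch_parsimony(tree, leaf_states):
    """Fitch's small-parsimony score for one character on a rooted binary tree.
    tree: nested 2-tuples whose leaves are taxon names; leaf_states: name -> state."""
    score = 0
    def post(node):
        nonlocal score
        if isinstance(node, str):          # leaf: its state set is a singleton
            return {leaf_states[node]}
        left, right = node
        a, b = post(left), post(right)
        inter = a & b
        if inter:
            return inter                   # children agree: no change needed here
        score += 1                         # children disagree: one substitution
        return a | b
    post(tree)
    return score

# Toy example: four taxa, one nucleotide column
tree = (("A", "B"), ("C", "D"))
states = {"A": "G", "B": "G", "C": "T", "D": "G"}
```

Here a single substitution (on the branch leading to taxon C) explains the column, so the score is 1.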
Abstract:
In this study, we investigated the role of routes and information acquisition in the foraging efficiency of the queenless ant species Dinoponera quadriceps. Two queenless ant colonies were observed at least once a week in an area of secondary Atlantic Forest at the FLONA-ICMBio of Nísia Floresta, in the state of Rio Grande do Norte, northeastern Brazil. In the first stage of the study, we observed the workers from leaving until returning to the colony. In the second stage, we placed an acrylic plate (100 x 30 x 0.8 cm) at a selected entrance of the nest early in the morning, before the ants left the nest. All behavioral recordings were done through focal-time and all-occurrence sampling. The recording windows were 15 minutes long, with 1-minute intervals within them and 5-minute intervals between observation windows. Foraging was the main activity when the workers were outside the nest. There was a positive correlation between time outside the nest and distance travelled by the ants. These variables influenced the proportion of resource that was taken to the nest: the bigger its proportion, the longer the time outside and the distance travelled during the search. That proportion also influenced the time the worker remained in the nest before a new trip: the bigger the proportion of the item, the shorter the time in the nest. Throughout the study, workers showed fidelity to the route and to the sectors of the home range, even when the plate was in the ants' way, since they deviated around it and kept to the route. The features of the workers' foraging concerning time, distance, route and flexibility to detour indicate that decisions are made by each individual and are optimal in terms of a cost-benefit relation. The strategy adopted by these queenless ants fits the central-place foraging and marginal value theorem theories and demonstrates their flexibility in the face of new information. This indicates that the workers can learn new environmental landmarks to guide their routes.
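The marginal value theorem mentioned above has a simple closed form under a common diminishing-returns assumption. The gain function g(t) = A·t/(B + t) and all parameter names below are illustrative choices, not measurements from this study.

```python
import math

def mvt_optimal_patch_time(travel_time, B):
    """Marginal value theorem with diminishing-returns gain g(t) = A*t/(B + t):
    setting the marginal gain g'(t*) equal to the overall rate g(t*)/(travel_time + t*)
    and solving yields the closed form t* = sqrt(B * travel_time)."""
    return math.sqrt(B * travel_time)
```

With a travel time of 4 and B = 9 (arbitrary units), the optimal patch residence time is 6, which indeed maximizes the overall intake rate t/((B + t)(T + t)).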
Abstract:
The Nelson-Oppen combination method allows several decision procedures, each designed for a specific theory, to be combined to reason about broader theories, through the principle of equality propagation. Theorem provers based on this model benefit from its modular design and can evolve more easily and incrementally. Difference logic is a subtheory of linear arithmetic. It is formed by constraints of the form x − y ≤ c, where x and y are variables and c is a constant. Difference logic is very common in many problems, such as digital circuits, scheduling and timed systems, and is predominant in several other cases. Difference logic can also be modeled using graph theory, which allows many efficient, well-known graph algorithms to be used. A decision procedure for difference logic must reason over thousands of constraints. Its main goal is to report whether a set of difference-logic constraints is satisfiable (that is, whether the variables can take values that make the set consistent) or not. Moreover, to work in a combination framework based on Nelson-Oppen, the decision procedure needs further functionality, such as the generation of variable equalities, proofs of inconsistency, premises, etc. This work presents a decision procedure for the theory of difference logic within an architecture based on the Nelson-Oppen combination method. The procedure was integrated into the haRVey prover, where its operation could be observed. Implementation details and experimental tests are reported.
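The graph-theoretic approach described above is classically implemented with Bellman-Ford negative-cycle detection: each constraint x − y ≤ c becomes an edge y → x with weight c, and the constraint set is satisfiable iff the graph has no negative-weight cycle. The sketch below shows only this core satisfiability check (equality generation and proof production are omitted) and is not haRVey's actual code.

```python
def diff_logic_sat(variables, constraints):
    """Decide satisfiability of difference-logic constraints (x, y, c) meaning x - y <= c.
    Each constraint becomes an edge y -> x with weight c; satisfiable iff the
    constraint graph has no negative-weight cycle (Bellman-Ford check)."""
    # Initializing all distances to 0 is equivalent to adding a virtual source
    # with 0-weight edges to every variable.
    dist = {v: 0 for v in variables}
    edges = [(y, x, c) for (x, y, c) in constraints]
    for _ in range(len(variables) - 1):
        for y, x, c in edges:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # One extra relaxation round: any improvement reveals a negative cycle.
    for y, x, c in edges:
        if dist[y] + c < dist[x]:
            return None                    # unsatisfiable
    return dist                            # dist is a satisfying assignment

# x - y <= 2, y - z <= 3, z - x <= -6 forms a cycle of weight 2 + 3 - 6 = -1: UNSAT
result = diff_logic_sat({"x", "y", "z"}, [("x", "y", 2), ("y", "z", 3), ("z", "x", -6)])
```

When the set is satisfiable, the final distance labels themselves are a model, which is convenient in a Nelson-Oppen setting for detecting implied variable equalities.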
Abstract:
The objective of research in artificial intelligence is to enable the computer to execute functions that are performed by humans using knowledge and reasoning. This work was developed in the area of machine learning, the branch of artificial intelligence concerned with the design and development of algorithms and techniques that allow computational learning. The objective of this work is to analyze a feature selection method for ensemble systems. The proposed method falls within the filter approach to feature selection, using variance and Spearman correlation to rank the features, and reward and punishment strategies to measure the importance of each feature for the identification of the classes. For each ensemble, several different configurations were used, varying from non-hybrid (homogeneous) to hybrid (heterogeneous) ensemble structures. They were submitted to five combination methods (voting, sum, weighted sum, multilayer Perceptron and naïve Bayes), applied to six distinct databases (real and artificial). The classifiers used in the experiments were k-nearest neighbors, multilayer Perceptron, naïve Bayes and decision tree. Finally, the performance of the ensembles was analyzed comparatively, using no feature selection method, using a filter-approach (original) feature selection method, and using the proposed method. For this comparison, a statistical test was applied, which demonstrated a significant improvement in the precision of the ensembles.
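A minimal sketch of the filter idea described above, ranking features by the absolute Spearman correlation with the class label (the reward/punishment strategy of the proposed method is not reproduced here; all names are illustrative):

```python
def rank(values):
    """Ranks starting at 1; ties receive the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def filter_features(features, labels, k):
    """Keep the k feature names with the largest |Spearman correlation| to the label."""
    return sorted(features, key=lambda f: -abs(spearman(features[f], labels)))[:k]
```

The filter is classifier-independent, which is exactly what makes it cheap to apply before building each member of an ensemble.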
Abstract:
Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers and, for this reason, faces a problem of an operational nature when we want to solve interval equations as extensions of real equations via the usual equality and interval arithmetic: intervals have no additive inverse, and the distributivity of multiplication over addition does not hold for every triple of intervals. The lack of those properties prevents the use of equational logic, both for solving an interval equation, for representing a real equation, and for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and of approximation on intervals, introduced by Acióly [6] in 1991, the idea arises of an interval equation that represents a real equation satisfactorily, since the terms of the interval equation carry the information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later on, of local equality for intervals [8, 33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to (Hennessy [31], 1988) and (Santiago [7], 1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≈ t′ is deduced in the proposed system SDedLoc(E), the interpretations of t and t′ are locally equal in any local Σ-algebra A that satisfies the fixed set E of local equations, whenever t and t′ have meaning in A. This ensures a kind of soundness between the local equational logic and the local algebras.
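The two algebraic failures mentioned above (no additive inverse, failure of distributivity) are easy to demonstrate with a minimal Moore-arithmetic class:

```python
class Interval:
    """Closed interval [lo, hi] with Moore's arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __eq__(self, o):
        return self.lo == o.lo and self.hi == o.hi
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
print(x - x)   # [-1, 1]: x - x is not [0, 0], so there is no additive inverse
```

Distributivity fails in the same way: with x = [1, 2], y = [1, 1] and z = [−1, −1], x·(y + z) = [0, 0] while x·y + x·z = [−1, 1], so only the subdistributive inclusion x·(y + z) ⊆ x·y + x·z holds.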
Abstract:
The widespread growth in the use of smart cards (by banks, transport services, cell phones, etc.) has brought up an important issue that must be addressed: the need for tools that can be used to verify such cards, so as to guarantee the correctness of their software. As the vast majority of cards being developed nowadays use JavaCard technology as their software layer, the use of the Java Modeling Language (JML) to specify their programs appears as a natural solution. JML is a formal language tailored to Java. It was inspired by methodologies from Larch and Eiffel and has been widely adopted as the de facto language for the specification of Java-related programs. Various tools that make use of JML have already been developed, covering a wide range of functionalities, such as runtime and static checking. But the tools that exist so far for static checking are not fully automated, and those that are do not offer an adequate level of soundness and completeness. Our objective is to contribute a series of techniques that can be used to accomplish fully automated and trustworthy verification of JavaCard applets. In this work we present the first steps in this direction. Using a software platform comprising Krakatoa, Why and haRVey, we developed a set of techniques to reduce the size of the theory necessary to verify the specifications. These techniques have yielded very good results, with gains of almost 100% in all tested cases, and have proved to be a valuable approach not only here but in most real-world problems related to automatic verification.
Abstract:
In this dissertation, after a brief review of Einstein's General Relativity Theory and its application to the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models, we present and discuss the alternative theories of gravity dubbed f(R) gravity. These theories come about when one substitutes, in the Einstein-Hilbert action, the Ricci curvature R by some well-behaved nonlinear function f(R). They provide an alternative way to explain the current cosmic acceleration with no need to invoke either a dark energy component or the existence of extra spatial dimensions. In dealing with f(R) gravity, two different variational approaches may be followed, namely the metric and the Palatini formalisms, which lead to very different equations of motion. We briefly describe the metric formalism and then concentrate on the Palatini variational approach to the gravity action. We make a systematic and detailed derivation of the field equations for Palatini f(R) gravity, which generalize Einstein's equations of General Relativity, and also obtain the generalized Friedmann equations, which can be used for cosmological tests. As an example, using recent compilations of type Ia Supernovae observations, we show how the f(R) = R − β/Rⁿ class of gravity theories explains the recently observed acceleration of the universe by placing reasonable constraints on the free parameters β and n. We also examine the question as to whether Palatini f(R) gravity theories permit space-times in which causality, a fundamental issue in any physical theory [22], is violated. As is well known, in General Relativity there are solutions to the field equations that have causal anomalies in the form of closed time-like curves, the renowned Gödel model being the best-known example of such a solution.
Here we show that every perfect-fluid Gödel-type solution of Palatini f(R) gravity with density ρ and pressure p satisfying the weak energy condition ρ + p ≥ 0 is necessarily isometric to the Gödel geometry, demonstrating, therefore, that these theories present causal anomalies in the form of closed time-like curves. This result extends a theorem on Gödel-type models to the framework of Palatini f(R) gravity theory. We derive an expression for a critical radius r_c (beyond which causality is violated) for an arbitrary Palatini f(R) theory. The expression makes apparent that the violation of causality depends on the form of f(R) and on the matter content. We concretely examine the Gödel-type perfect-fluid solutions in the f(R) = R − β/Rⁿ class of Palatini gravity theories, and show that for positive matter density and for β and n in the range permitted by the observations, these theories do not admit the Gödel geometry as a perfect-fluid solution of their field equations. In this sense, f(R) gravity theory remedies the causal pathology, in the form of closed time-like curves, which is allowed in General Relativity. We also examine the violation of causality in Gödel-type space-times by considering a single scalar field as the matter content. For this source, we show that Palatini f(R) gravity gives rise to a unique Gödel-type solution with no violation of causality. Finally, we show that by combining a perfect fluid and a scalar field as sources of Gödel-type geometries, we obtain both solutions with closed time-like curves and solutions with no violation of causality.
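For reference, the Palatini field equations the abstract refers to have the following standard form in the literature (notation is the usual one, not necessarily the dissertation's):

```latex
% Palatini f(R) field equations: the metric g and the connection \Gamma
% are varied independently in the action S = \frac{1}{16\pi G}\int d^4x \sqrt{-g}\, f(\mathcal{R}) + S_m.
\[
f'(\mathcal{R})\, R_{\mu\nu}(\Gamma) - \tfrac{1}{2} f(\mathcal{R})\, g_{\mu\nu}
  = 8\pi G\, T_{\mu\nu},
\qquad
\nabla_{\lambda}\!\left(\sqrt{-g}\, f'(\mathcal{R})\, g^{\mu\nu}\right) = 0,
\]
% where \mathcal{R} = g^{\mu\nu} R_{\mu\nu}(\Gamma); the second equation fixes the
% independent connection, and f(\mathcal{R}) = \mathcal{R} recovers General Relativity.
```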
Abstract:
Considering a non-relativistic ideal gas, the standard foundations of kinetic theory are investigated in the context of the non-Gaussian statistical mechanics introduced by Kaniadakis. The new formalism is based on the generalization of the Boltzmann H-theorem and the deduction of Maxwell's statistical distribution. The calculated power-law distribution is parameterized by a parameter κ measuring the degree of non-Gaussianity. In the limit κ = 0, the Gaussian Maxwell-Boltzmann distribution is recovered. Two physical applications of the non-Gaussian effects have been considered. In the first, the κ-Doppler broadening of spectral lines from an excited gas is obtained from analytical expressions. In the second, a mathematical relationship between the entropic index κ and the stellar polytropic index is shown, using the thermodynamic formulation for self-gravitating systems.
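The κ-deformed exponential underlying Kaniadakis statistics reduces to the ordinary exponential as κ → 0 and has power-law tails otherwise; a minimal numerical sketch (function and parameter names are ours):

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis kappa-exponential:
    exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k), which tends to exp(x) as k -> 0."""
    if abs(kappa) < 1e-12:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa * kappa * x * x) + kappa * x) ** (1.0 / kappa)
```

For negative arguments the κ-exponential decays as a power law rather than exponentially, which is the origin of the non-Gaussian tails of the velocity distribution discussed in the abstract.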
Abstract:
Considering a quantum gas, the foundations of standard thermostatistics are investigated in the context of the non-Gaussian statistical mechanics introduced by Tsallis and Kaniadakis. The new formalism is based on the following generalizations: i) the Maxwell-Boltzmann-Gibbs entropy and ii) the deduction of the H-theorem. Based on this investigation, we calculate a new entropy, using a generalization of combinatorial analysis based on two different counting methods. The basic ingredients used in the H-theorem were a generalized quantum entropy and a generalization of the collisional term of the Boltzmann equation. The power-law distributions are parameterized by the parameters q and κ, measuring the degree of non-Gaussianity of the quantum gas. In the limit q = 1 and κ = 0, the standard quantum distributions are recovered.
Abstract:
We considered prediction techniques based on accelerated failure time models with random effects for correlated survival data. Besides the Bayesian approach through the empirical Bayes estimator, we also discussed the use of a classical predictor, the Empirical Best Linear Unbiased Predictor (EBLUP). In order to illustrate the use of these predictors, we considered applications to a real data set from the oil industry. More specifically, the data set involves the mean time between failures of petroleum-well equipment of the Bacia Potiguar. The goal of this study is to predict the risk/probability of failure in order to support a preventive maintenance program. The results show that both methods are suitable to predict future failures, leading to good decisions regarding the employment and economy of resources for preventive maintenance.
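The empirical Bayes idea mentioned above can be illustrated with a simple shrinkage predictor that pulls each group's observed mean toward the grand mean, pulling harder on groups with few observations. This is a generic sketch, not the dissertation's actual AFT/EBLUP model; all names are illustrative.

```python
def empirical_bayes_shrinkage(group_means, group_sizes, within_var):
    """Shrink each group's observed mean toward the grand mean.
    Weight = between-group variance / (between + within/n): groups with few
    observations are pulled more strongly toward the overall mean."""
    grand = sum(group_means) / len(group_means)
    between = sum((m - grand) ** 2 for m in group_means) / (len(group_means) - 1)
    preds = []
    for m, n in zip(group_means, group_sizes):
        w = between / (between + within_var / n)
        preds.append(w * m + (1.0 - w) * grand)
    return preds
```

In a maintenance context, a well with a single recorded failure time gets a prediction dominated by the fleet-wide average, while a well with a long history keeps a prediction close to its own record.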
Abstract:
In this work we analyze the Euler Relation in general, using the manipulation of concrete materials as a means to visualize the fundamental idea, so that the content becomes easier to understand, extending the learning to secondary and even elementary school students. The study is an introduction to the topic and leads the reader to understand that the well-known Euler Relation, if inadequately presented, is not sufficient to establish the existence of a polyhedron. By analyzing some examples, the text introduces the idea of doubt, showing cases in which numbers that satisfy the Euler Relation do not correspond to any polyhedron. The work also highlights a theorem, certainly unfamiliar to many students and teachers who study polyhedra, presenting some very simple inequalities relating the numbers of edges, vertices and faces of any convex polyhedron, which clearly specify the necessary and sufficient conditions for its existence, without the need to visualize the solid on screen. So that we can examine various polyhedra and facilitate the understanding of what is presented, we use GeoGebra, a dynamic application that combines mathematical concepts of algebra and geometry and can be found at http://www.geogebra.org
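The inequalities alluded to above are plausibly the classical ones for convex polyhedra (every face has at least three edges and every vertex at least three incident edges, giving 2E ≥ 3F and 2E ≥ 3V alongside Euler's relation V − E + F = 2). The sketch below checks these necessary conditions; it is an illustration, not the text's exact theorem.

```python
def could_be_convex_polyhedron(V, E, F):
    """Necessary conditions for (V, E, F) to be the counts of a convex polyhedron:
    Euler's relation V - E + F = 2, plus 2E >= 3F (each face has >= 3 edges)
    and 2E >= 3V (each vertex has >= 3 incident edges)."""
    return V - E + F == 2 and 2 * E >= 3 * F and 2 * E >= 3 * V
```

For example, the triple (1, 0, 1) satisfies Euler's relation yet fails both inequalities, illustrating the abstract's point that the relation alone does not establish the existence of a polyhedron.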
Abstract:
Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. The assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow-wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that neuronal assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
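The assembly-graph construction described above (assemblies as nodes, consecutive activations as edges) can be sketched directly; the activation sequence below is invented for illustration, and the out-degree is just one simple example of the graph attributes the study analyzes.

```python
from collections import Counter

def assembly_graph(activations):
    """Build a directed graph from a sequence of assembly activations:
    each assembly is a node, each consecutive pair of activations an edge,
    with edge multiplicities kept as counts."""
    nodes = set(activations)
    edges = Counter(zip(activations, activations[1:]))
    return nodes, edges

def out_degrees(nodes, edges):
    """Number of distinct successor assemblies per node (a simple graph attribute)."""
    deg = {n: 0 for n in nodes}
    for (src, _dst), _count in edges.items():
        deg[src] += 1
    return deg

# Toy activation sequence: three assemblies activated over time
seq = ["A1", "A2", "A1", "A3", "A2", "A1"]
nodes, edges = assembly_graph(seq)
```

Attributes computed this way (degrees, edge counts, and so on) are what a classifier such as naïve Bayes can then use to separate experimental periods or behavioral states.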