30 results for Gauss-Bonnet Theorem
Abstract:
This research aims to present non-Euclidean geometry to the reader as an anomaly, indicating its pedagogical implications, and then to propose a sequence of activities, divided into three blocks, which show the relationship between Euclidean and non-Euclidean geometry, taking the Euclidean one as the reference for analyzing the anomaly in the non-Euclidean ones. The work is tied to the PPGECNM research line in History, Philosophy and Sociology of Science in the Teaching of Natural Sciences and Mathematics. It treats Euclid of Alexandria and his most famous work, The Elements, and emphasizes Euclid's Fifth Postulate, particularly the difficulties (which lasted several centuries) that mathematicians had in understanding it, until, in the nineteenth century, three mathematicians, Lobachevsky (1793-1856), Bolyai (1775-1856) and Gauss (1777-1855), became convinced that this axiom could not be derived from the others and that there was another (anomalous) geometry, as consistent as Euclid's, which did not fit its parameters. The emergence of non-Euclidean geometry is attributed to these three. As for the methodology, we started with some bibliographic definitions of anomaly, then characterized them so that our definition would be better understood by the reader, and only then dealt with the non-Euclidean geometries (hyperbolic geometry, spherical geometry and taxicab geometry), confronting them with the Euclidean one in order to analyze the anomalies existing in the non-Euclidean geometries and to observe their importance for teaching. After this characterization follows the empirical part of the proposal, which consisted of the application of three blocks of activities in search of the pedagogical implications of anomaly: the first on parallel lines, the second on the study of triangles and the third on the shortest distance between two points. These blocks offer work with basic elements of geometry based on a historical and investigative study of the non-Euclidean geometries as anomaly, so that each concept is understood together with its properties without necessarily being tied to the image of the geometric elements, thus expanding or adapting to other frames of reference. For example, the block applied on the second day of activities extends the result for the sum of the internal angles of a triangle, showing that it is not always 180° (that conclusion can only be drawn when Euclid is the reference).
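Since this angle-sum example is exactly where the Gauss-Bonnet theorem of the search query enters, a standard illustration may help: the local Gauss-Bonnet formula for a geodesic triangle, quoted here only as a sketch.

```latex
% Local Gauss--Bonnet formula for a geodesic triangle T with
% interior angles \alpha, \beta, \gamma on a surface of
% Gaussian curvature K:
\alpha + \beta + \gamma = \pi + \iint_T K \, dA
% K = 0 (Euclidean plane):    the angle sum is exactly 180°;
% K > 0 (sphere):             the angle sum exceeds 180°;
% K < 0 (hyperbolic plane):   the angle sum is below 180°.
```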
Abstract:
In Einstein's theory of General Relativity the field equations relate the geometry of space-time to the content of matter and energy, the sources of the gravitational field. This content is described by a second-order tensor, known as the energy-momentum tensor. On the other hand, the energy-momentum tensors that have physical meaning are not specified by the theory. In the 1970s, Hawking and Ellis proposed a set of conditions, considered feasible from a physical point of view, in order to limit the arbitrariness of these tensors. These conditions, which became known as the Hawking-Ellis energy conditions, play important roles in the gravitation scenario. They are widely used as powerful tools of analysis: from the demonstration of important theorems concerning the behavior of gravitational fields and the associated geometries, and the quantum behavior of gravity, to the analysis of cosmological models. In this dissertation we present a rigorous deduction of the several energy conditions currently in vogue in the scientific literature: the Null Energy Condition (NEC), the Weak Energy Condition (WEC), the Strong Energy Condition (SEC), the Dominant Energy Condition (DEC) and the Null Dominant Energy Condition (NDEC). Bearing in mind the simplest applications in Cosmology and Gravitation, the deductions were initially made for the energy-momentum tensor of a generalized perfect fluid and then extended to scalar fields with minimal and non-minimal coupling to the gravitational field. We also present a study of the possible violations of some of these energy conditions. Aiming at the study of the singular nature of some exact solutions of Einstein's General Relativity, in 1955 the Indian physicist Raychaudhuri derived an equation that is today considered fundamental to the study of the gravitational attraction of matter, which became known as the Raychaudhuri equation. This famous equation is fundamental to the understanding of gravitational attraction in Astrophysics and Cosmology and to the comprehension of the singularity theorems, such as the Hawking and Penrose theorem on the singularity of gravitational collapse. In this dissertation we derive the Raychaudhuri equation, the Frobenius theorem and the Focusing theorem for time-like and null congruences of a pseudo-Riemannian manifold. We discuss the geometric and physical meaning of this equation, its connections with the energy conditions, and some of its several applications.
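For reference, these conditions take a particularly simple form for a perfect fluid with energy density ρ and pressure p, and the Raychaudhuri equation for a time-like geodesic congruence has the standard textbook form quoted below (a sketch, not the dissertation's own derivation):

```latex
% Hawking--Ellis energy conditions for a perfect fluid
% with energy density \rho and pressure p:
\text{NEC:}\quad \rho + p \ge 0
\text{WEC:}\quad \rho \ge 0, \quad \rho + p \ge 0
\text{SEC:}\quad \rho + 3p \ge 0, \quad \rho + p \ge 0
\text{DEC:}\quad \rho \ge |p|

% Raychaudhuri equation for a time-like geodesic congruence with
% expansion \theta, shear \sigma_{ab} and vorticity \omega_{ab}:
\frac{d\theta}{d\tau} = -\frac{\theta^2}{3}
  - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab}
  - R_{ab}\,u^a u^b
```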
Abstract:
The standard kinetic theory for a nonrelativistic dilute gas is generalized in the spirit of the nonextensive statistical distribution introduced by Tsallis. The new formalism depends on an arbitrary parameter q measuring the degree of nonextensivity. In the limit q = 1, the extensive Maxwell-Boltzmann theory is recovered. Starting from a purely kinetic deduction of the velocity q-distribution function, the Boltzmann H-theorem is generalized to include the possibility of nonextensive out-of-equilibrium effects. Based on this investigation, it is proved that Tsallis' distribution is the necessary and sufficient condition defining a thermodynamic equilibrium state in the nonextensive context. This result follows naturally from the generalized transport equation and also from the extended H-theorem. Two physical applications of the nonextensive effects have been considered. Closed analytic expressions were obtained for the Doppler broadening of spectral lines from an excited gas, as well as for the dispersion relations describing the electrostatic oscillations in a dilute electronic plasma. In the latter case, a comparison with the experimental results strongly suggests a Tsallis distribution with the parameter q smaller than unity. A complementary study addresses the thermodynamic behavior of a relativistic imperfect simple fluid. Using nonequilibrium thermodynamics, we show how the basic primary variables, namely the energy-momentum tensor and the particle and entropy fluxes, depend on the several dissipative processes present in the fluid. The temperature variation law for this moving imperfect fluid is also obtained, and the Eckart and Landau-Lifshitz formulations are recovered as particular cases.
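A minimal sketch of the velocity q-distribution mentioned above, in the standard Tsallis form (notation assumed here: m is the particle mass, T the temperature, k_B the Boltzmann constant):

```latex
% Tsallis velocity q-distribution for a dilute gas:
f_q(v) \propto \left[1 - (1-q)\,\frac{m v^2}{2 k_B T}\right]^{\frac{1}{1-q}}
% In the limit q \to 1 this reduces to the Maxwell--Boltzmann
% distribution f(v) \propto \exp\!\left(-\frac{m v^2}{2 k_B T}\right).
```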
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
We present in this work two methods of estimation for accelerated failure time models with random effects to process grouped survival data. The first method, implemented in the SAS software through the NLMIXED procedure, uses an adaptive Gauss-Hermite quadrature to determine the marginalized likelihood. The second method, implemented in the free software R, is based on the penalized likelihood method to estimate the parameters of the model. In the first case we describe the main theoretical aspects and, in the second, we briefly present the adopted approach, together with a simulation study to investigate the performance of the method. We illustrate the models using real data on the operating time of oil wells from the Potiguar Basin (RN/CE).
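A minimal sketch of the idea behind the first method, marginalizing a normal random effect by Gauss-Hermite quadrature; `loglik_cond` is a hypothetical conditional log-likelihood, and this non-adaptive version is only an illustration (NLMIXED's adaptive variant additionally recenters and rescales the nodes at the mode of each group's integrand):

```python
import numpy as np

def marginal_loglik(theta, loglik_cond, sigma_b=1.0, n_nodes=15):
    """Approximate log of the integral of L(theta | b) * N(b; 0, sigma_b^2) db
    by (non-adaptive) Gauss-Hermite quadrature."""
    # Nodes/weights for the rule: integral of e^{-x^2} g(x) dx ~ sum w_i g(x_i)
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    # The change of variable b = sqrt(2) * sigma_b * x absorbs the
    # normal density of the random effect into the quadrature weights.
    b = np.sqrt(2.0) * sigma_b * nodes
    vals = np.exp([loglik_cond(theta, bi) for bi in b])
    return np.log(np.sum(weights * vals) / np.sqrt(np.pi))
```

For grouped survival data the same quadrature would be applied group by group, summing the resulting group-level log-likelihoods.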
Abstract:
In this study, we investigated the role of routes and information acquisition in the foraging efficiency of the queenless ant species Dinoponera quadriceps. Two colonies were observed at least once a week in an area of secondary Atlantic Forest at the FLONA-ICMBio of Nísia Floresta, in the state of Rio Grande do Norte, northeastern Brazil. In the first stage of the study, we observed the workers from leaving until returning to the colony. In the second stage, we placed an acrylic plate (100 × 30 × 0.8 cm) at a selected nest entrance early in the morning, before the ants left the nest. All behavioral recordings were done through focal-animal and all-occurrence sampling, with 15-minute recording windows at 1-minute intervals and 5-minute intervals between observation windows. Foraging was the main activity when the workers were outside the nest. There was a positive correlation between time outside the nest and distance travelled by the ants. These variables influenced the proportion of resource taken to the nest: the bigger its proportion, the longer the time outside and the distance travelled during the search. That proportion also influenced the time a worker remained in the nest before a new trip: the bigger the proportion of the item, the shorter the time in the nest. Throughout the study, workers showed fidelity to their routes and to sectors of the home range, even when the plate was in their way, since they deviated around it and kept the route. The features of foraging concerning time, distance, route and the workers' flexibility in detouring indicate that decisions are made by each individual and are optimal in terms of a cost-benefit relation. The strategy adopted by these queenless ants fits central-place foraging and marginal value theorem predictions and demonstrates flexibility toward new information. This indicates that the workers can learn new environmental landmarks to guide their routes.
Abstract:
The Nelson-Oppen combination method allows several decision procedures, each designed for a specific theory, to be combined to reason about more comprehensive theories through the principle of equality propagation. Theorem provers based on this model benefit from its modular character and can evolve more easily and incrementally. Difference logic is a subtheory of linear arithmetic. It consists of constraints of the form x − y ≤ c, where x and y are variables and c is a constant. Difference logic is very common in many problems, such as digital circuits, scheduling and temporal systems, and it is predominant in several other cases. Difference logic can also be modeled using graph theory, which allows several well-known, efficient graph algorithms to be used. A decision procedure for difference logic must be able to reason over thousands of constraints. Its main goal is to report whether a set of difference logic constraints is satisfiable (the variables can assume values that make the set consistent) or not. Moreover, to work in a Nelson-Oppen combination model, the decision procedure needs other functionalities, such as the generation of variable equalities, proofs of inconsistency, premises, etc. This work presents a decision procedure for the difference logic theory within an architecture based on the Nelson-Oppen combination method. The work was carried out by integrating the procedure into the haRVey prover, where its operation could be observed. Implementation details and experimental tests are reported.
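A minimal sketch of the graph-theoretic core of such a decision procedure (not the haRVey integration itself): each constraint x − y ≤ c becomes an edge y → x with weight c, and the set is unsatisfiable exactly when Bellman-Ford detects a negative cycle.

```python
def dl_satisfiable(constraints):
    """constraints: list of (x, y, c) meaning x - y <= c.
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    vars_ = {v for x, y, _ in constraints for v in (x, y)}
    # Distances from an implicit source connected to every variable
    # with weight 0, so every distance starts at 0.
    dist = {v: 0 for v in vars_}
    # Bellman-Ford: relax every edge |V| - 1 times.
    for _ in range(len(vars_) - 1):
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # One more pass: any further relaxation means a negative cycle,
    # i.e. an unsatisfiable cycle of constraints.
    for x, y, c in constraints:
        if dist[y] + c < dist[x]:
            return None
    return dist  # dist[x] - dist[y] <= c holds for every constraint
```

For example, `dl_satisfiable([('x', 'y', 3), ('y', 'x', -5)])` returns None, since adding the two constraints gives the contradiction 0 ≤ −2; the negative cycle found is exactly the kind of inconsistency proof a Nelson-Oppen combination asks the procedure to report.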
Abstract:
Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers, and for this reason it faces a problem of an operative nature when we want to solve interval equations as extensions of real equations through the usual equality and interval arithmetic: intervals have no additive inverse, and the distributivity of multiplication over addition does not hold for every triple of intervals. The lack of those properties prevents the use of equational logic, both for solving an interval equation and for representing a real equation, and also for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and approximation on intervals, introduced by Acióly [6] in 1991, the idea arises that an interval equation can represent a real equation satisfactorily, since the terms of the interval equation carry the information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later on, of local equality for intervals [8] and [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to (Hennessy [31], 1988) and (Santiago [7], 1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≈ t′ is deduced from a fixed set E of local equations in the proposed system SDedLoc(E), the interpretations of t and t′ are locally equal in any local Σ-algebra A that satisfies E, whenever t and t′ have meaning in A. This assures a kind of safety between local equational logic and local algebras.
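A small sketch (hypothetical `Interval` class with the standard Moore operations) exhibiting the two failures mentioned above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # Moore subtraction: [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

x = Interval(1, 2)
print(x - x)        # Interval(lo=-1, hi=1): not [0, 0],
                    # so x has no additive inverse
a, b, c = Interval(1, 2), Interval(1, 1), Interval(-1, -1)
print(a * (b + c))  # Interval(lo=0, hi=0)
print(a * b + a * c)  # Interval(lo=-1, hi=1): distributivity fails;
                      # only a*(b+c) contained in a*b + a*c holds
```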
Abstract:
The widespread growth in the use of smart cards (by banks, transport services, cell phones, etc.) has brought an important fact that must be addressed: the need for tools that can be used to verify such cards, so as to guarantee the correctness of their software. As the vast majority of cards being developed nowadays use JavaCard technology as their software layer, the use of the Java Modeling Language (JML) to specify their programs appears as a natural solution. JML is a formal language tailored to Java. It was inspired by the methodologies of Larch and Eiffel, and has been widely adopted as the de facto language for the specification of Java programs. Various tools that make use of JML have already been developed, covering a wide range of functionalities, such as runtime and static checking. But the tools available so far for static checking are not fully automated, and those that are do not offer an adequate level of soundness and completeness. Our objective is to contribute a series of techniques that can be used to accomplish a fully automated and confident verification of JavaCard applets, and in this work we present the first steps towards it. Using a software platform comprising Krakatoa, Why and haRVey, we developed a set of techniques to reduce the size of the theory necessary to verify the specifications. Such techniques have yielded very good results, with gains of almost 100% in all tested cases, and have proved valuable not only here, but in most real-world problems related to automatic verification.
Abstract:
In this dissertation, after a brief review of Einstein's General Relativity theory and its application to the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models, we present and discuss the alternative theories of gravity dubbed f(R) gravity. These theories come about when one substitutes, in the Einstein-Hilbert action, the Ricci curvature R by some well-behaved nonlinear function f(R). They provide an alternative way to explain the current cosmic acceleration with no need to invoke either a dark energy component or the existence of extra spatial dimensions. In dealing with f(R) gravity, two different variational approaches may be followed, namely the metric and the Palatini formalisms, which lead to very different equations of motion. We briefly describe the metric formalism and then concentrate on the Palatini variational approach to the gravity action. We make a systematic and detailed derivation of the field equations for Palatini f(R) gravity, which generalize the Einstein equations of General Relativity, and also obtain the generalized Friedmann equations, which can be used for cosmological tests. As an example, using recent compilations of type Ia Supernovae observations, we show how the f(R) = R − β/R^n class of gravity theories explains the recently observed acceleration of the universe by placing reasonable constraints on the free parameters β and n. We also examine the question as to whether Palatini f(R) gravity theories permit space-times in which causality, a fundamental issue in any physical theory [22], is violated. As is well known, in General Relativity there are solutions to the field equations that have causal anomalies in the form of closed time-like curves, the renowned Gödel model being the best-known example of such a solution. Here we show that every perfect-fluid Gödel-type solution of Palatini f(R) gravity with density ρ and pressure p satisfying the weak energy condition ρ + p ≥ 0 is necessarily isometric to the Gödel geometry, demonstrating, therefore, that these theories present causal anomalies in the form of closed time-like curves. This result extends a theorem on Gödel-type models to the framework of Palatini f(R) gravity theory. We derive an expression for a critical radius rc (beyond which causality is violated) for an arbitrary Palatini f(R) theory. The expression makes apparent that the violation of causality depends on the form of f(R) and on the matter content components. We concretely examine the Gödel-type perfect-fluid solutions in the f(R) = R − β/R^n class of Palatini gravity theories, and show that for positive matter density and for β and n in the range permitted by the observations, these theories do not admit the Gödel geometry as a perfect-fluid solution of their field equations. In this sense, Palatini f(R) gravity remedies the causal pathology in the form of closed time-like curves which is allowed in General Relativity. We also examine the violation of causality of Gödel type by considering a single scalar field as the matter content. For this source, we show that Palatini f(R) gravity gives rise to a unique Gödel-type solution with no violation of causality. Finally, we show that by combining a perfect fluid and a scalar field as sources of Gödel-type geometries, we obtain both solutions with closed time-like curves and solutions with no violation of causality.
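For reference, the Palatini approach treats the metric and the connection as independent variables; the action and the resulting field equations have the standard forms below (quoted as a sketch, with κ = 8πG):

```latex
% Palatini f(R) action (matter coupled only to the metric):
S = \frac{1}{2\kappa}\int d^4x \,\sqrt{-g}\, f(R) + S_m[g_{\mu\nu}, \psi],
\qquad R = g^{\mu\nu} R_{\mu\nu}(\Gamma)

% Varying g_{\mu\nu} and \Gamma independently gives:
f'(R)\, R_{(\mu\nu)}(\Gamma) - \tfrac{1}{2} f(R)\, g_{\mu\nu}
  = \kappa\, T_{\mu\nu},
\qquad
\nabla_\alpha\!\left(\sqrt{-g}\, f'(R)\, g^{\mu\nu}\right) = 0
% For f(R) = R one has f'(R) = 1, the second equation forces
% \Gamma to be the Levi-Civita connection of g_{\mu\nu}, and
% General Relativity is recovered.
```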
Abstract:
Considering a non-relativistic ideal gas, the standard foundations of kinetic theory are investigated in the context of the non-Gaussian statistical mechanics introduced by Kaniadakis. The new formalism is based on the generalization of the Boltzmann H-theorem and on the deduction of Maxwell's statistical distribution. The calculated power-law distribution is parameterized through a parameter κ measuring the degree of non-Gaussianity. In the limit κ = 0, the theory of the Gaussian Maxwell-Boltzmann distribution is recovered. Two physical applications of the non-Gaussian effects have been considered. In the first one, analytical expressions are obtained for the κ-Doppler broadening of spectral lines from an excited gas. In the second one, a mathematical relationship between the entropic index κ and the stellar polytropic index is shown by using the thermodynamic formulation for self-gravitating systems.
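The power-law distribution referred to above is built on the Kaniadakis κ-exponential; its standard definition is shown here only as an illustration:

```latex
% Kaniadakis \kappa-exponential:
\exp_\kappa(x) = \left(\sqrt{1 + \kappa^2 x^2} + \kappa x\right)^{1/\kappa}
% \kappa-generalized velocity distribution:
f_\kappa(v) \propto \exp_\kappa\!\left(-\frac{m v^2}{2 k_B T}\right)
% For \kappa \to 0, \exp_\kappa(x) \to e^{x} and the
% Maxwell--Boltzmann distribution is recovered.
```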
Abstract:
Considering a quantum gas, the foundations of standard thermostatistics are investigated in the context of the non-Gaussian statistical mechanics introduced by Tsallis and Kaniadakis. The new formalism is based on the following generalizations: i) the Maxwell-Boltzmann-Gibbs entropy and ii) the deduction of the H-theorem. Based on this investigation, we calculate a new entropy using a generalization of combinatorial analysis based on two different counting methods. The basic ingredients used in the H-theorem were a generalized quantum entropy and a generalization of the collisional term of the Boltzmann equation. The power-law distributions are parameterized by the parameters q and κ, measuring the degree of non-Gaussianity of the quantum gas. In the limit q → 1 and κ → 0, the standard results of quantum statistics are recovered.
Abstract:
Survival models deal with the modeling of time-to-event data. However, in some situations part of the population may no longer be subject to the event. Models that take this fact into account are called cure rate models. There are few studies about hypothesis tests in cure rate models. Recently a new test statistic, the gradient statistic, has been proposed. It shares the same asymptotic properties with the classic large-sample tests: the likelihood ratio, score and Wald tests. Some simulation studies have been carried out to explore the behavior of the gradient statistic in finite samples and to compare it with the classic statistics in different models. The main objective of this work is to study and compare the performance of the gradient test and the likelihood ratio test in cure rate models. We first describe the models and present the main asymptotic properties of the tests. We then perform a simulation study based on the promotion time model with Weibull distribution to assess the performance of the tests in finite samples. An application is presented to illustrate the studied concepts.
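For a simple null hypothesis H_0: θ = θ_0, the gradient statistic mentioned above has the form below (the standard definition; for composite hypotheses θ_0 is replaced by the restricted maximum likelihood estimate):

```latex
% Gradient statistic (U: score function, \hat\theta: unrestricted MLE):
S_T = U(\theta_0)^{\top}\,(\hat{\theta} - \theta_0)
% Under H_0, S_T is asymptotically \chi^2 with degrees of freedom
% equal to the number of restrictions, like the likelihood ratio,
% score and Wald statistics, but it requires neither the Fisher
% information matrix nor its inverse.
```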
Abstract:
In this work we analyze the Euler Relation, using the manipulation of concrete materials as a means to visualize the fundamental idea, so that the content becomes easier to understand, extending the learning to secondary and even primary school students. The study is an introduction to the topic and leads the reader to understand that the well-known Euler Relation, if inadequately presented, is not sufficient to establish the existence of a polyhedron. By analyzing some examples, the text introduces the idea of doubt, showing cases where numbers that satisfy the Euler Relation nevertheless correspond to no polyhedron. The research also highlights a theorem certainly unfamiliar to many students and teachers who study polyhedra, presenting some very simple inequalities relating the numbers of edges, vertices and faces of any convex polyhedron, which clearly specify the necessary and sufficient conditions for its existence, without the need to visualize the solid on screen. So that we can view various polyhedra and facilitate the understanding of what is exposed, we use GeoGebra, a dynamic application that combines mathematical concepts of algebra and geometry, available at http://www.geogebra.org
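The inequalities alluded to are presumably the classical ones below, stated here only as an illustration:

```latex
% Euler's relation for every convex polyhedron
% (V vertices, E edges, F faces):
V - E + F = 2
% Each face has at least 3 edges and each edge lies on 2 faces;
% each vertex meets at least 3 edges and each edge has 2 ends:
2E \ge 3F, \qquad 2E \ge 3V
% Combined with Euler's relation, these give, for example:
E \le 3V - 6, \qquad E \le 3F - 6
```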
Abstract:
Acid rain is a major assault on the environment, a consequence of the burning of fossil fuels and of sulfur-dioxide-based industrial pollutants released into the atmosphere. The objective of this research was to monitor and analyze changes in the quality of rainwater in the city of Natal, seeking to investigate influences on that quality at the local, regional and global scales, in addition to possible effects of this quality on the local landscape. Data collection was performed from December 2005 to December 2007. We used nephanalysis techniques to identify synoptic systems, field research in the search for possible effects of acid rain on the landscape, and the collection and analysis of precipitation data and its degree of acidity. Descriptive statistics (standard deviation and coefficient of variation) were used to monitor the chemical behavior of precipitation; monitoring of pH measurement errors, confidence levels, the normalized Gaussian distribution, confidence intervals and ANOVA were also used. The main results show the pH varying between 5.021 and 6.836, with a mean of 5.958 and a standard deviation of 0.402, showing that the mean can represent the sample. Thus, we can infer that, according to CONAMA Resolution 357 (the acidity index for fresh water should be between 6.0 and 9.0), the precipitation of Natal/RN is slightly acidic. The Intertropical Convergence Zone showed the most acidic figures among the synoptic systems analyzed, with a mean pH of 5.617, an acid value, a standard deviation of 0.235 and a coefficient of variation of 4.183%, which shows that the mean can represent the sample. Field research also found several places that strongly suffer the action of acid rain. The results are original, however, and need further investigation, including the use of new methodologies.
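A minimal sketch of the descriptive statistics used above (the pH values here are hypothetical, for illustration only):

```python
import numpy as np
from scipy import stats

ph = np.array([5.62, 5.83, 6.10, 5.41, 6.02, 5.75])  # hypothetical pH sample

mean = ph.mean()
sd = ph.std(ddof=1)        # sample standard deviation
cv = 100 * sd / mean       # coefficient of variation (%)

# 95% confidence interval for the mean (t distribution):
ci = stats.t.interval(0.95, df=len(ph) - 1,
                      loc=mean, scale=stats.sem(ph))

print(f"mean={mean:.3f}  sd={sd:.3f}  cv={cv:.2f}%  CI95={ci}")
```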