1000 results for Probability density function - pdf
Abstract:
Considering that osteopenia and osteoporosis are complications of diabetes mellitus, and that tamoxifen (TAM) is an anti-estrogenic drug used in breast cancer treatment, this drug may have a beneficial action in preventing the accentuated bone loss associated with diabetes. Female Wistar rats (n=60) weighing 180-250 g were divided into four groups: Group C, control animals (n=5); Group T, animals treated with TAM (n=5); Group D, diabetic animals (n=5); and Group DT, diabetic animals treated with TAM (n=5). The oestrus cycle was evaluated before the beginning of the experimental period to select animals with a regular cycle; this evaluation continued throughout the study period for all groups. Diabetes was induced by an intraperitoneal injection of streptozotocin (STZ) at 45 mg/kg of body weight. Animals with serum glucose levels above 250 mg/dL were considered diabetic. Animals were sacrificed 30, 60 and 90 days after diabetes onset. Left femur histomorphometric measurements and serum biochemical analyses (glycemia, alkaline phosphatase, tartrate-resistant acid phosphatase, calcium, phosphorus, magnesium, total proteins, albumin, globulins, urea and creatinine) were performed. Histomorphometric results showed a progressive bone loss in Group D animals when compared with Group C throughout the experimental period, becoming accentuated at 90 days. Groups T and DT showed values close to those of the control group during the whole study, which may indicate a recovery of bone mass, or a reduction of the bone loss due to diabetes, when the animals were treated with TAM. Throughout the experimental period, animals of groups D and DT maintained glycemic levels above 250 mg/dL, whereas animals of groups C and T maintained levels below 150 mg/dL. Alkaline phosphatase activity was increased in groups D and DT, compared with Group C, at all time points over the 90-day period. Tartrate-resistant acid phosphatase activity remained unaltered in all periods and in all groups. Calcium and magnesium also remained unaltered, within reference levels, for all groups in all experimental periods. Phosphorus levels were increased in groups D and DT compared with groups C and T at 30 days; however, no difference was found at 60 and 90 days. No difference was found in total protein levels among groups C, T, D and DT over the study period. Albumin levels were reduced in the DT group at 60 days and in the D and DT groups at 90 days. Urea levels were significantly increased at 30, 60 and 90 days in groups D and DT compared with groups C and T. Creatinine showed a significant increase at 90 days in groups D and DT compared with groups C and T, remaining unaltered at 30 and 60 days. These results suggest that treatment with TAM may reduce the bone loss caused by diabetes mellitus.
Abstract:
Diffusive processes are extremely common in Nature. Many complex systems, such as microbial colonies, colloidal aggregates, diffusion of fluids, and migration of populations, involve a large number of similar units that form fractal structures. A new model of diffusive aggregation was proposed recently by Filoche and Sapoval [68]. Based on their work, we develop a model called Diffusion with Aggregation and Spontaneous Reorganization. This model consists of a set of particles with excluded-volume interactions, which perform random walks on a square lattice. Initially, the lattice is occupied with a density ρ = N/L² of particles occupying distinct, randomly chosen positions. One of the particles is selected at random as the active particle. This particle executes a random walk until it visits a site occupied by another particle, j. When this happens, the active particle is sent back to its previous position (neighboring particle j), and a new active particle is selected at random from the set of N particles. Following an initial transient, the system attains a stationary regime. In this work we study the stationary regime, focusing on the scaling properties of the particle distribution, as characterized by the pair correlation function φ(r). The latter is calculated by averaging over a long sequence of configurations generated in the stationary regime, using systems of size L = 50, 75, 100, 150, . . . , 700. The pair correlation function exhibits distinct behaviors in three different density ranges, which we term subcritical, critical, and supercritical. We show that in the subcritical regime the particle distribution is characterized by a fractal dimension. We also analyze the decay of temporal correlations.
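A minimal sketch of the dynamics described above may help fix the rules; the lattice size, density, periodic boundaries and number of sampled episodes are our own assumptions, not details taken from the thesis:

```python
import numpy as np

# Sketch of the "Diffusion with Aggregation and Spontaneous Reorganization"
# dynamics; all numerical parameters here are illustrative assumptions.
rng = np.random.default_rng(0)
L, rho = 50, 0.10
N = int(rho * L * L)

occupied = np.zeros((L, L), dtype=bool)
pos = []
while len(pos) < N:                          # N particles on distinct random sites
    x, y = rng.integers(0, L, size=2)
    if not occupied[x, y]:
        occupied[x, y] = True
        pos.append((x, y))

steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
for _ in range(100_000):                     # one episode per active particle
    i = rng.integers(N)                      # select the active particle at random
    x, y = pos[i]
    occupied[x, y] = False                   # it leaves its site and walks...
    while True:
        dx, dy = steps[rng.integers(4)]
        nx, ny = (x + dx) % L, (y + dy) % L  # periodic boundaries (assumption)
        if occupied[nx, ny]:                 # ...until it visits an occupied site:
            break                            # it stays at its previous position
        x, y = nx, ny
    occupied[x, y] = True                    # particle sticks next to particle j
    pos[i] = (x, y)
# phi(r) would now be accumulated by averaging over stationary configurations
```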
Abstract:
In this dissertation, after a brief review of Einstein's General Relativity and its application to the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models, we present and discuss the alternative theories of gravity dubbed f(R) gravity. These theories come about when one substitutes, in the Einstein-Hilbert action, the Ricci curvature R by some well-behaved nonlinear function f(R). They provide an alternative way to explain the current cosmic acceleration without invoking either a dark energy component or the existence of extra spatial dimensions. In dealing with f(R) gravity, two different variational approaches may be followed, namely the metric and the Palatini formalisms, which lead to very different equations of motion. We briefly describe the metric formalism and then concentrate on the Palatini variational approach to the gravity action. We make a systematic and detailed derivation of the field equations for Palatini f(R) gravity, which generalize Einstein's equations of General Relativity, and also obtain the generalized Friedmann equations, which can be used for cosmological tests. As an example, using recent compilations of type Ia supernovae observations, we show how the f(R) = R − β/Rⁿ class of gravity theories explains the recently observed acceleration of the universe by placing reasonable constraints on the free parameters β and n. We also examine the question as to whether Palatini f(R) gravity theories permit space-times in which causality, a fundamental issue in any physical theory [22], is violated. As is well known, in General Relativity there are solutions to the field equations that have causal anomalies in the form of closed time-like curves, the renowned Gödel model being the best-known example of such a solution. Here we show that every perfect-fluid Gödel-type solution of Palatini f(R) gravity with density ρ and pressure p that satisfy the weak energy condition ρ + p ≥ 0 is necessarily isometric to the Gödel geometry, demonstrating, therefore, that these theories present causal anomalies in the form of closed time-like curves. This result extends a theorem on Gödel-type models to the framework of Palatini f(R) gravity theory. We derive an expression for a critical radius rc (beyond which causality is violated) for an arbitrary Palatini f(R) theory. The expression makes apparent that the violation of causality depends on the form of f(R) and on the matter content components. We concretely examine the Gödel-type perfect-fluid solutions in the f(R) = R − β/Rⁿ class of Palatini gravity theories, and show that for positive matter density and for β and n in the range permitted by the observations, these theories do not admit the Gödel geometry as a perfect-fluid solution of their field equations. In this sense, f(R) gravity theory remedies the causal pathology in the form of closed time-like curves which is allowed in General Relativity. We also examine the violation of causality of Gödel-type by considering a single scalar field as the matter content. For this source, we show that Palatini f(R) gravity gives rise to a unique Gödel-type solution with no violation of causality. Finally, we show that by combining a perfect fluid plus a scalar field as sources of Gödel-type geometries, we obtain both solutions with closed time-like curves and solutions with no violation of causality.
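For reference, the Palatini field equations mentioned above take the standard form sketched below (standard conventions; the notation, e.g. κ and the matter action S_m, is ours and may differ from the dissertation's):

```latex
% Palatini f(R) gravity: metric g and connection \Gamma varied independently.
S = \frac{1}{2\kappa}\int d^{4}x\,\sqrt{-g}\,f(R) + S_{m}[g_{\mu\nu},\psi],
\qquad R \equiv g^{\mu\nu} R_{\mu\nu}(\Gamma).
% Varying S with respect to g_{\mu\nu} and \Gamma gives, respectively,
f'(R)\,R_{\mu\nu}(\Gamma) - \tfrac{1}{2} f(R)\,g_{\mu\nu} = \kappa\,T_{\mu\nu},
\qquad
\nabla^{\Gamma}_{\alpha}\!\left(\sqrt{-g}\,f'(R)\,g^{\mu\nu}\right) = 0.
```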
Resumo:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The objective of this dissertation is the development of a general formalism to analyze the thermodynamical properties of a photon gas in the context of nonlinear electrodynamics (NLED). To this end, the general dependence of the Lagrangian that describes this kind of theory is obtained through a systematic analysis of the properties of Maxwell's electromagnetism (EM). From this Lagrangian, and within classical field theory, we derive the general dispersion relation that photons must obey in terms of a background field and the NLED properties. It is important to note that, in order to achieve this result, an approximation was made to allow the separation of the total electromagnetic field into a strong background electromagnetic field and a perturbation. Once the dispersion relation is at hand, the usual Bose-Einstein statistical procedure is followed, through which the thermodynamical properties, energy density and pressure relations are obtained. An important result of this work is the fact that the equation of state remains identical to the one obtained under EM. Two examples are then worked out, in which the thermodynamic properties are explicitly derived in the context of two NLED theories: Born-Infeld and a quadratic approximation. The first was chosen because of its wide appearance in the literature, and the second because it is a first-order approximation of a large class of NLED theories; ultimately, both were chosen for their simplicity. Finally, the results are compared to EM and interpreted, suggesting possible tests to verify the internal consistency of NLED and motivating further development of the formalism's quantum case.
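The unchanged equation of state can be checked with a short numerical sketch: for any linear dispersion ω = vk, the Bose-Einstein integrals for energy density and pressure carry the same factor of v⁻³, so their ratio p/u = 1/3 is the same as in Maxwell theory. This is our own illustration, not code from the dissertation:

```python
import numpy as np
from scipy.integrate import quad

# In dimensionless form x = hbar*omega/(kB*T), the photon-gas integrals
# are independent of the propagation speed v:
I_u = quad(lambda x: x**3 / np.expm1(x), 0, np.inf)[0]             # energy density
I_p = quad(lambda x: -x**2 * np.log1p(-np.exp(-x)), 0, np.inf)[0]  # pressure

print(I_u, np.pi**4 / 15)   # -> pi^4/15
print(I_p, np.pi**4 / 45)   # -> pi^4/45
print(I_p / I_u)            # -> 1/3: the equation of state p = u/3 is unchanged
```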
Abstract:
We use a tight-binding formulation to investigate the transmissivity and the current-voltage (I-V) characteristics of sequences of double-strand DNA molecules. In order to reveal the relevance of the underlying correlations in the nucleotide distribution, we compare the results for the genomic DNA sequence with those of artificial sequences (the long-range correlated Fibonacci and Rudin-Shapiro ones) and a random sequence, which is a kind of prototype of a short-range correlated system. The random sequence is presented here with the same first-neighbor pair correlations of the human DNA sequence. We found that the long-range character of the correlations is important to the transmissivity spectra, although the I-V curves seem to be mostly influenced by the short-range correlations. We also analyze in this work the electronic and thermal properties along an α-helix sequence obtained from an α3 peptide which has the one-dimensional sequence (Leu-Glu-Thr-Leu-Ala-Lys-Ala)3. An ab initio quantum chemical calculation procedure is used to obtain the highest occupied molecular orbital (HOMO) states as well as their charge transfer integrals, when the α-helix sequence forms two different variants, with (the so-called 5Q variant) and without (the 7Q variant) fibrous assemblies that can be observed by transmission electron microscopy. The difference between the two structures is that the 5Q (7Q) structure has an Ala → Gln substitution at the 5th (7th) position, respectively. We estimate theoretically the density of states as well as the electronic transmission spectra for the peptides, using a tight-binding Hamiltonian model together with Dyson's equation. Besides, we solve the time-dependent Schrödinger equation to compute the spread of an initially localized wave-packet. We also compute the localization length in the finite α-helix segment and the quantum specific heat. Keeping in mind that fibrous proteins can be associated with diseases, the important differences observed in the present electronic transport studies encourage us to suggest this method as a molecular diagnostic tool.
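As an illustration of a tight-binding transmissivity calculation of this kind, the sketch below computes T(E) for a single 1-D chain via transfer matrices; the on-site energies, uniform hopping and random four-letter sequence are stand-in assumptions, not the parameters of the thesis (which treats a double-strand model and uses Dyson's equation):

```python
import numpy as np

# Sketch under stand-in assumptions: illustrative on-site energies and a
# random sequence; not the thesis parameters or its duplex model.
eps = {'A': -0.1, 'T': 0.1, 'C': -0.3, 'G': 0.3}   # on-site energies (arbitrary units)
t = 1.0                                             # hopping integral
rng = np.random.default_rng(1)
seq = rng.choice(list('ATCG'), size=200)            # stand-in for a DNA sequence

def transmission(E):
    """Transmission T(E) of the chain embedded in perfect leads (E = 2t cos k)."""
    M = np.eye(2)
    for base in seq:                                # product of 2x2 transfer matrices
        M = np.array([[(E - eps[base]) / t, -1.0],
                      [1.0, 0.0]]) @ M
    k = np.arccos(E / (2 * t))                      # lead Bloch wavevector
    den = abs(M[0, 0] * np.exp(-1j * k) + M[0, 1]
              - M[1, 0] - M[1, 1] * np.exp(1j * k)) ** 2
    return 4 * np.sin(k) ** 2 / den

for E in (-1.0, -0.5, 0.0, 0.5, 1.0):               # energies inside the lead band
    print(f"E = {E:+.1f}   T(E) = {transmission(E):.3e}")
```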
Abstract:
In this work we present a study of the structural, electronic and optical properties, at ambient conditions, of SrSnO3, SrxBa1
Abstract:
In this work we study the consistency of a class of kernel estimators of f(·) for Markov chains with general state space E ⊂ R^d. The study is divided into two parts: in the first, f(·) is a stationary density of the chain; in the second, f(x)ν(dx) is the limit distribution of a geometrically ergodic chain.
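A minimal sketch of such a kernel estimator applied to a Markov chain (our own illustration; the chain, bandwidth rule and grid are assumptions): an AR(1) chain is geometrically ergodic with a Gaussian stationary density, so the estimate can be checked against the exact answer.

```python
import numpy as np

# Kernel density estimate for the AR(1) chain X_{t+1} = a X_t + eps_t,
# whose stationary density is N(0, 1/(1 - a^2)).
rng = np.random.default_rng(0)
a, n = 0.5, 20_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):                        # simulate the chain
    x[t + 1] = a * x[t] + rng.standard_normal()

h = 1.06 * x.std() * n ** (-1 / 5)            # Silverman's rule-of-thumb bandwidth
grid = np.linspace(-4.0, 4.0, 9)
kern = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
f_hat = kern.mean(axis=1) / (h * np.sqrt(2 * np.pi))   # Gaussian kernel estimator

sigma2 = 1.0 / (1.0 - a ** 2)                 # exact stationary variance
f_true = np.exp(-grid ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
print(np.c_[grid, f_hat, f_true])             # estimate tracks the true density
```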
Abstract:
Genetic Algorithms (GA) and Simulated Annealing (SA) are algorithms built to find the maximum or minimum of a function that represents some characteristic of the process being modeled. These algorithms have mechanisms that allow them to escape from local optima; however, they evolve over time in completely different ways. In its search process, SA works with a single point, always generating from it a new solution that is tested and may or may not be accepted, whereas GA works with a set of points, called a population, from which it generates another population that is always accepted. What the two algorithms have in common is that the way the next point or the next population is generated obeys stochastic properties. In this work we show that the mathematical theory describing the evolution of these algorithms is the theory of Markov chains: GA is described by a homogeneous Markov chain, whereas SA is described by a non-homogeneous Markov chain. Finally, some computational examples are given comparing the performance of the two algorithms.
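A minimal sketch of the SA side of this comparison (our own toy example, not the thesis code): a single current point, a stochastic proposal, the Metropolis acceptance test, and a cooling schedule that makes the underlying Markov chain non-homogeneous.

```python
import numpy as np

# Simulated annealing minimizing a 1-D multimodal toy objective.
rng = np.random.default_rng(0)
f = lambda x: x**2 + 10 * np.sin(3 * x)       # toy objective with local minima

x = 4.0                                       # current point (single state)
T = 5.0                                       # initial temperature
for step in range(5000):
    x_new = x + rng.normal(scale=0.5)         # propose a neighboring solution
    dE = f(x_new) - f(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new                             # accept (possibly uphill) move
    T = 5.0 * 0.999 ** step                   # cooling: non-homogeneous chain

print(x, f(x))                                # a near-optimal point
```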
Abstract:
Two-level factorial designs are widely used in industrial experimentation. However, a design with many factors requires a large number of runs to perform the experiment, and many replications of the treatments may not be feasible, given limitations of resources and time, making the experiment expensive. In these cases, unreplicated designs are used. But, with only one replicate, there is no internal estimate of experimental error with which to judge the significance of the observed effects. One of the possible solutions to this problem is to use normal plots or half-normal plots of the effects. Many experimenters use the normal plot, while others prefer the half-normal plot, often, in both cases, without justification. The controversy about the use of these two graphical techniques motivates this work, since there is no record of a formal procedure or statistical test that indicates which one is best. The choice between the two plots seems to be a subjective issue. The central objective of this master's thesis is, then, to perform an experimental comparative study of the normal plot and the half-normal plot in the context of the analysis of unreplicated 2^k factorial experiments. This study involves the construction of simulated scenarios, in which the plots' performance in detecting significant effects and identifying outliers is evaluated in order to address the following questions: Can one plot be better than the other? In which situations? What kind of information does one plot add to the analysis of the experiment that might complement the information provided by the other? What are the restrictions on the use of these plots? With this, the work intends to confront the two techniques and examine them simultaneously, in order to identify similarities, differences or relationships that contribute to the construction of a theoretical reference to justify, or to aid, the experimenter's decision about which of the two graphical techniques to use and the reason for this use. The simulation results show that the half-normal plot is better for assisting in the judgment of the effects, while the normal plot is recommended for detecting outliers in the data.
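A sketch of how half-normal plot coordinates are computed for the effect estimates (our own illustration; the effect values, and their split into inert and active ones, are made up):

```python
import numpy as np
from scipy.stats import norm

# Made-up effect estimates: 12 inert effects plus 3 active ones.
rng = np.random.default_rng(2)
effects = np.r_[rng.normal(0.0, 1.0, 12), [8.0, -10.0, 6.5]]

abs_eff = np.sort(np.abs(effects))                    # ordered |effect|
m = len(abs_eff)
# Half-normal plotting positions: Phi^{-1}(0.5 + 0.5*(i - 0.5)/m), i = 1..m
q = norm.ppf(0.5 + 0.5 * (np.arange(1, m + 1) - 0.5) / m)

for qi, ei in zip(q, abs_eff):
    print(f"{qi:6.3f}   {ei:7.3f}")   # plotting ei vs qi: inert effects fall on a
                                      # line through the origin, active ones above it
```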
Abstract:
In this work we present a mathematical and computational model of electrokinetic phenomena in electrically charged porous media. We consider the porous medium as composed of three different scales (nanoscopic, microscopic and macroscopic). On the microscopic scale, the domain is composed of a porous matrix and a solid phase. The pores are filled with an aqueous phase consisting of fully diluted ionic solutes, and the solid matrix consists of electrically charged particles. Initially we present the mathematical model that governs the electrical double layer, in order to quantify the electric potential, electric charge density, ion adsorption and chemical adsorption at the nanoscopic scale. Then we derive the microscopic model, where the adsorption of ions due to the electric double layer, the protonation/deprotonation reactions and the zeta potential obtained in the nanoscopic modeling appear at the microscopic scale through interface conditions in the Stokes problem and the Nernst-Planck equations, which respectively govern the movement of the aqueous solution and the transport of ions. We carry out the upscaling of the nano/microscopic problem using the homogenization technique for periodic structures, deducing the macroscopic model with its respective cell problems for the effective parameters of the macroscopic equations. Considering a clayey porous medium consisting of parallel kaolinite clay plates, we rewrite the macroscopic model in a one-dimensional version. Finally, using a sequential algorithm, we discretize the macroscopic model via the finite element method, together with the Picard iterative method for the nonlinear terms. Numerical simulations in the transient regime with variable pH in the one-dimensional case are obtained, aiming at the computational modeling of the electroremediation of contaminated clay soils.
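A minimal sketch of a Picard iteration of the kind used for the nonlinear terms (our own toy system, not the model's Stokes/Nernst-Planck blocks): the nonlinearity is frozen at the previous iterate and a linear system is solved until the fixed point is reached.

```python
import numpy as np

# Toy nonlinear system A(u) u = b standing in for the discretized model.
def A(u):
    n = len(u)
    return (np.diag(2.0 + u ** 2)              # diffusion stiffened by the solution
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

b = np.ones(8)
u = np.zeros(8)                                # initial guess
for it in range(100):
    u_new = np.linalg.solve(A(u), b)           # linear solve, coefficients frozen
    converged = np.linalg.norm(u_new - u) < 1e-10
    u = u_new
    if converged:
        break
print(it, u)
```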
Abstract:
In general, an inverse problem corresponds to finding an element x in a suitable vector space, given a vector y that measures it in some sense. When we discretize the problem, it usually boils down to solving an equation system f(x) = y, where f : U ⊂ R^m → R^n represents the discretized model on a suitable domain U. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns by observing their effects through indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for an ill-conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. Our more specific focus consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make in this dissertation consists mainly in the discussion of the numerical simulations we performed, as presented in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying anything definitive about the subject: partly because it is based on numerical experiments with no new mathematical results attached, partly because those experiments were made with a single operator. On the other hand, the simulations yielded some observations that seemed interesting to us in light of the literature of the area. In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods like GCV and the L-curve, and also about the tendency, observed with the L-curve method, of the optimal parameters to cluster in a small interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
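A minimal sketch of Tikhonov regularization with the parameter chosen by GCV (our own illustration: an identity regularizer and a Vandermonde test matrix stand in for the difference operator and the Radon transform used in the dissertation):

```python
import numpy as np

# Ill-conditioned toy problem with GCV-based choice of lambda.
rng = np.random.default_rng(0)
n = 20
A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)  # ill-conditioned
x_true = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n))
y = A @ x_true + 1e-4 * rng.standard_normal(n)               # noisy data

I = np.eye(n)
def gcv(lam):
    """GCV function G(lam) = n*||(I-H)y||^2 / tr(I-H)^2 for Tikhonov."""
    H = A @ np.linalg.solve(A.T @ A + lam * I, A.T)          # influence matrix
    r = (I - H) @ y
    return n * (r @ r) / np.trace(I - H) ** 2

lams = np.logspace(-10, 0, 50)
lam = lams[np.argmin([gcv(l) for l in lams])]                # GCV-chosen parameter
x_reg = np.linalg.solve(A.T @ A + lam * I, A.T @ y)          # regularized solution
print(lam, np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```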
Abstract:
We present a dependent risk model to describe the surplus of an insurance portfolio, based on the article "A ruin model with dependence between claim sizes and claim intervals" (Albrecher and Boxma [1]). An exact expression for the Laplace transform of the survival function of the surplus is derived. The results obtained are illustrated by several numerical examples, and the case in which we ignore the dependence structure present in the model is investigated. For phase-type claim sizes, we study the survival probability, since this is a computationally tractable and fairly general class of distributions.
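A Monte Carlo sketch of a surplus process with this kind of dependence (our own simplification: exponential claims and a claim-dependent rate for the next interclaim time, rather than the article's exact mechanism or phase-type claims):

```python
import numpy as np

# Finite-horizon survival probability for a surplus process in which the
# size of a claim influences the distribution of the NEXT interclaim time.
rng = np.random.default_rng(0)
u0, c, horizon = 10.0, 1.5, 200.0       # initial surplus, premium rate, horizon

def survives():
    u, t, prev_claim = u0, 0.0, 1.0
    while t < horizon:
        rate = 1.0 / (0.5 + 0.5 * prev_claim)   # big claim -> longer expected wait
        w = rng.exponential(1.0 / rate)          # next interclaim time
        t += w
        u += c * w                               # premiums accrue
        prev_claim = rng.exponential(1.0)        # exponential claim size
        u -= prev_claim
        if u < 0:
            return False                         # ruin
    return True

n = 20_000
print(sum(survives() for _ in range(n)) / n)     # estimated survival probability
```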
Abstract:
In general, the study of quadratic functions is based on an excessive number of formulas, and all the content is presented without justification. Here we construct the quadratic function and its properties from problems involving quadratic equations and the technique of completing the square. Based on the definitions, we show that the graph of a quadratic function is a parabola, and we finish our study by observing that several properties of the function can be read off from simple inspection of its graph. In this way, we build the entire subject justifying each step, abandoning the use of memorized formulas and valuing reasoning.
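For concreteness, the completing-the-square step referred to above reads:

```latex
% Completing the square (standard algebra; a worked instance of the technique):
f(x) = ax^{2} + bx + c
     = a\left(x + \frac{b}{2a}\right)^{2} + \frac{4ac - b^{2}}{4a},
\qquad
\text{vertex } \left(-\frac{b}{2a},\, -\frac{\Delta}{4a}\right),
\quad \Delta = b^{2} - 4ac.
```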
Abstract:
This thesis aims to assist teachers and students in the teaching and learning of probability in high school, a subject that sharpens the perception and understanding of the random phenomena that surround us. It also aims to help those involved in this process understand the basic ideas of probability and, when necessary, apply them in the real world. We seek to strike a balance between intuition and rigor, and hope thereby to contribute to the teacher's work in the classroom and to the students' learning process, consolidating, deepening and expanding what they have learned in previous content.