116 results for "Duffing, Equations of"
Abstract:
The so-called gravitomagnetic field arose from an old conjecture that currents of matter (rather than of charge) would produce gravitational effects similar to those produced by electric currents in electromagnetism. In 1918, Hans Thirring, using the weak-field approximation to Einstein's field equations, deduced that a slowly rotating massive shell drags the inertial frames in the direction of its rotation. In the same year, Josef Lense applied Thirring's calculations to astronomy. That effect later came to be known as the Lense-Thirring effect. Along with the de Sitter effect, these phenomena were recently tested by a gyroscope in orbit around the Earth, as proposed by George E. Pugh in 1959 and Leonard I. Schiff in 1960. In this dissertation, we study the gravitational effects associated with the rotation of massive bodies in the light of Einstein's General Theory of Relativity. To that end, we develop the weak-field approximation to General Relativity and obtain the various associated gravitational effects: gravitomagnetic time delay, the de Sitter effect (geodesic precession), and the Lense-Thirring effect (dragging of inertial frames). We discuss the measurements of the Lense-Thirring effect made with the LAGEOS (Laser Geodynamics Satellite) satellites and by the Gravity Probe B (GPB) mission. The GPB satellite was launched by NASA in 2004 into orbit around the Earth at an altitude of 642 km. Results presented in May 2011 clearly show the existence of the Lense-Thirring effect (a dragging of inertial frames of 37.2 ± 7.2 mas/year) and of the de Sitter effect (a geodesic precession of 6,601.8 ± 18.3 mas/year), measured with an accuracy of 19% and of 0.28%, respectively (1 mas = 1 milliarcsecond = 4.848 × 10⁻⁹ radian). These results are in good agreement with the General Relativity predictions of 41 mas/year for the Lense-Thirring effect and 6,606.1 mas/year for the de Sitter effect.
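The two predictions quoted in the abstract can be reproduced to good approximation from the standard weak-field formulas for a circular polar orbit. The sketch below is an order-of-magnitude check, not the full GPB analysis; the constants, in particular the value used for Earth's spin angular momentum, are assumed round values and not taken from the dissertation.

```python
import math

# Order-of-magnitude check of the two GR predictions for a GPB-like orbit,
# using textbook weak-field formulas for a circular polar orbit.
G  = 6.674e-11            # m^3 kg^-1 s^-2
c  = 2.998e8              # m/s
GM = 3.986e14             # m^3/s^2, Earth's gravitational parameter
J  = 5.86e33              # kg m^2/s, Earth's spin angular momentum (approximate)
r  = (6371 + 642) * 1e3   # orbit radius: Earth radius + GPB altitude, in m

MAS_PER_RAD  = 1 / 4.848e-9   # 1 mas = 4.848e-9 rad
SEC_PER_YEAR = 3.156e7

# Geodesic (de Sitter) precession rate for a circular orbit:
#   Omega_dS = (3/2) (GM)^{3/2} / (c^2 r^{5/2})
omega_ds = 1.5 * GM**1.5 / (c**2 * r**2.5)    # rad/s

# Orbit-averaged Lense-Thirring (frame-dragging) rate for a polar orbit:
#   Omega_LT = G J / (2 c^2 r^3)
omega_lt = G * J / (2 * c**2 * r**3)          # rad/s

omega_ds_mas = omega_ds * SEC_PER_YEAR * MAS_PER_RAD
omega_lt_mas = omega_lt * SEC_PER_YEAR * MAS_PER_RAD

print(f"de Sitter:      {omega_ds_mas:.0f} mas/yr")   # close to 6,606.1
print(f"Lense-Thirring: {omega_lt_mas:.1f} mas/yr")   # close to 41
```

The small residual discrepancy with the quoted figures comes from the rounded constants, not from the formulas.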
Abstract:
The study of solar neutrinos is very important for a better comprehension of the set of nuclear reactions that occur inside the Sun and in solar-type stars. The flux of neutrinos provides a better comprehension of the stellar structure as a whole. In this dissertation we study the flux of neutrinos in a solar model, addressing neutrino oscillation, with the aim of determining and verifying the distribution from a statistical point of view, since this flux depends on the intrinsic velocity distributions of the particles in the stellar plasma. The main tool for this analysis was the Toulouse-Geneva Stellar Evolution Code (TGEC), which allows us to obtain the neutrino flux values per reaction and per layer inside the Sun, and to compare them with the observational results for the neutrino flux detected in experiments based on Cl37 (Homestake), Ga71 (SAGE, GALLEX/GNO) and water (SNO). Our results show the final distribution of the neutrino flux as a function of depth, using mass and radius coordinates. The dissertation also shows that the equations for this flux are present in TGEC.
Abstract:
In this work we study a new risk model for a firm that is sensitive to its credit quality, proposed by Yang (2003). We obtain recursive equations for the finite-time ruin probability and for the distribution of the time of ruin, as well as systems of Volterra-type integral equations for the ultimate ruin probability, the severity of ruin, and the distribution of the surplus before and after ruin.
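Yang's credit-quality-dependent model is beyond a short example, but the recursion idea behind the finite-time ruin probability can be sketched on a plain discrete-time risk model with unit premium and integer-valued aggregate claims. The claim distribution below is an assumption chosen only for illustration.

```python
from functools import lru_cache

# Per period: the insurer collects premium 1 and pays an aggregate claim Y
# with pmf `claim_pmf`; ruin occurs as soon as the surplus becomes negative.
claim_pmf = {0: 0.6, 2: 0.4}   # illustrative distribution, mean 0.8 < 1

@lru_cache(maxsize=None)
def ruin_prob(u, n):
    """Probability of ruin within n periods, starting from integer surplus u."""
    if n == 0:
        return 0.0
    total = 0.0
    for y, p in claim_pmf.items():
        s = u + 1 - y          # surplus after premium income and claim payment
        total += p * (1.0 if s < 0 else ruin_prob(s, n - 1))
    return total

print(ruin_prob(0, 1))    # ruin in one step only if Y = 2: probability 0.4
print(ruin_prob(0, 10))   # nondecreasing in the horizon n
```

The ultimate ruin probability is the limit of `ruin_prob(u, n)` as the horizon grows; the thesis characterizes it through integral equations instead.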
Abstract:
In the work reported here we present theoretical and numerical results on a risk model with interest rate and proportional reinsurance, based on the article "Inequalities for the ruin probability in a controlled discrete-time risk process" by Rosario Romera and Maikol Diasparra (see [5]). Recursive and integral equations, as well as upper bounds for the ruin probability, are given under three different approaches, namely the classical Lundberg inequality, an inductive approach, and a martingale approach. Non-parametric density estimation techniques are used to derive upper bounds for the ruin probability, and the algorithms used in the simulation are presented.
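As a rough illustration of the controlled surplus process (not of the paper's analytical bounds), the sketch below runs a Monte Carlo estimate for a simplified recursion of the form X_{n+1} = X_n(1+i) + b·c − b·Y_{n+1}, where b is the retention level of the proportional reinsurance. The premium treatment and all numeric parameters are loose assumptions for illustration only.

```python
import random

def ruin_prob_mc(u, b=0.8, i=0.02, premium=1.0, horizon=50,
                 n_paths=5000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    simplified discrete-time surplus process with interest and proportional
    reinsurance (retention level b)."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        x = u
        for _ in range(horizon):
            y = rng.expovariate(1.25)              # claim, mean 0.8 (assumed)
            x = x * (1 + i) + b * premium - b * y  # surplus recursion
            if x < 0:
                ruins += 1
                break
    return ruins / n_paths

p0 = ruin_prob_mc(u=0.0)
p5 = ruin_prob_mc(u=5.0)
print(p0, p5)   # ruin probability decreases with the initial capital u
```

Such simulation estimates are what the upper bounds in the paper are compared against.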
Abstract:
In this work we present a mathematical and computational model of electrokinetic phenomena in electrically charged porous media. We consider a porous medium composed of three different scales (nanoscopic, microscopic, and macroscopic). At the microscopic scale the domain consists of a solid matrix and a pore space: the pores are filled with an aqueous phase consisting of fully diluted ionic solutes, and the solid matrix consists of electrically charged particles. We first present the mathematical model governing the electric double layer, which quantifies the electric potential, the electric charge density, ion adsorption, and chemical adsorption at the nanoscopic scale. We then derive the microscopic model, in which the ion adsorption due to the electric double layer, the protonation/deprotonation reactions, and the zeta potential obtained from the nanoscopic model appear at the microscopic scale through interface conditions in the Stokes and Nernst-Planck equations, which respectively govern the movement of the aqueous solution and the transport of ions. We then upscale the nano/microscopic problem using the homogenization technique for periodic structures, deducing the macroscopic model together with the corresponding cell problems for the effective parameters of the macroscopic equations. Considering a clayey porous medium consisting of kaolinite clay plates distributed in parallel, we rewrite the macroscopic model in a one-dimensional version. Finally, using a sequential algorithm, we discretize the macroscopic model via the finite element method, together with the iterative Picard method for the nonlinear terms. Numerical simulations in the transient regime with variable pH are obtained in the one-dimensional case, aiming at the computational modeling of the electroremediation process of contaminated clay soils.
Abstract:
This work has two objectives: (i) to conduct a literature survey on criteria for uniqueness of solutions of initial value problems for ordinary differential equations; (ii) to present a modification of Euler's method that appears to be able to converge to a solution of the problem even when the solution is not unique.
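The modification of Euler's method is developed in the work itself; as background, the sketch below shows why non-uniqueness is delicate numerically, using plain forward Euler on the classic non-unique IVP y' = 3 y^(2/3), y(0) = 0, which admits both y(t) = 0 and y(t) = t^3 as solutions.

```python
# Plain forward Euler (not the modified method from the work).
def euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: 3.0 * abs(y) ** (2.0 / 3.0)

y_end = euler(f, 0.0, 0.0, 1.0, 1000)
print(y_end)     # exactly 0.0: Euler stays locked on the trivial solution y = 0

# A tiny perturbation of the initial value selects a different solution branch:
y_pert = euler(f, 1e-9, 0.0, 1.0, 1000)
print(y_pert)    # close to 1 = 1**3, near the branch y(t) = t**3
```

Because f(t, 0) = 0, the unmodified scheme can never leave the trivial solution, no matter how small the step size; this is the behavior a modified method must address.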
Abstract:
In general, an inverse problem consists of finding an element x of a suitable vector space from a vector y that measures it, in some sense. When we discretize the problem, it usually boils down to solving an equation system f(x) = y, where f : U ⊂ R^m → R^n represents the forward map on a domain U of the appropriate R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns that we try to know by observing their effects under certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. Our more specific focus is the set of simulations reported in Chapter 2, which compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem. The contribution we attempt to make consists mainly in the discussion of the numerical simulations we execute, as exposed in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying something definitive about the subject: partly because it is based on numerical experiments with no new mathematical results associated with them, and partly because those experiments were made with a single operator. On the other hand, we obtained some observations on the simulations performed that seemed interesting to us in view of the literature of the area.
In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods such as GCV and the L-curve, and about the tendency of the optimal parameters observed in the L-curve method to group themselves in a small gap, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
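The setup described above, a poorly conditioned linear operator with a difference-operator regularizer, can be sketched on a toy problem. The dissertation uses the Radon transform; below, a small Hilbert matrix stands in as the ill-conditioned forward operator (an assumption made for brevity), and the effect of the Tikhonov parameter is shown by comparing two choices against the known true signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Hilbert matrix: a classic severely ill-conditioned operator
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-6 * rng.standard_normal(n)

# First-order difference operator D as the regularizer
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

def tikhonov(lam):
    # Minimize ||A x - y||^2 + lam^2 ||D x||^2 via the normal equations
    return np.linalg.solve(A.T @ A + lam**2 * (D.T @ D), A.T @ y)

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
err_tiny = err(tikhonov(1e-10))   # essentially unregularized: noise blows up
err_good = err(tikhonov(1e-3))    # moderate regularization
print(f"rel. error, lambda = 1e-10: {err_tiny:.3g}")
print(f"rel. error, lambda = 1e-3 : {err_good:.3g}")
```

Methods such as GCV, the L-curve, and the discrepancy principle exist precisely because, in practice, x_true is unknown and this oracle comparison is unavailable.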
Abstract:
In general, the study of quadratic functions is based on an excessive number of formulas, with all of the content presented without justification. Here we construct the quadratic function and its properties from problems involving quadratic equations and the technique of completing the square. Starting from the definitions, we show that the graph of the quadratic function is a parabola, and we finish our study by verifying that several properties of the function can be read off from simple observation of its graph. In this way, we build the entire subject justifying each step, abandoning the use of memorized formulas and valuing reasoning.
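The technique of completing the square, central to the approach above, can be summarized in a single identity (standard textbook material, not specific to this dissertation):

```latex
\[
  ax^{2}+bx+c
  \;=\; a\left(x^{2}+\frac{b}{a}x\right)+c
  \;=\; a\left(x+\frac{b}{2a}\right)^{2}-\frac{b^{2}-4ac}{4a},
  \qquad a \neq 0.
\]
```

From this form every usual property can be read off without memorized formulas: the graph is a parabola with vertex at \((-b/2a,\,-(b^{2}-4ac)/4a)\), it opens upward when \(a>0\) and downward when \(a<0\), and setting the squared term equal to \((b^{2}-4ac)/4a^{2}\) recovers the quadratic formula.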
Abstract:
In this work we studied the method for solving systems of linear equations presented in the book "The Nine Chapters on the Mathematical Art", written in the first century of our era. This work intends to show how the history of mathematics can be used to motivate the introduction of some topics in high school. Through observation of the patterns that repeat themselves in the presented method, we were able to introduce, in a very natural way, the concepts of linear equation, system of linear equations, solution of a system of linear equations, determinants, and matrices, besides the Laplace expansion for computing determinants of square matrices of order greater than 3, and then to consider some of their general applications.
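The elimination procedure of the "Nine Chapters" (the fangcheng rule of Chapter 8) is essentially Gaussian elimination. A minimal sketch on the book's classic grain problem, 3x + 2y + z = 39, 2x + 3y + z = 34, x + 2y + 3z = 26:

```python
def solve_linear(a, b):
    """Solve a*x = b by Gaussian elimination with back-substitution."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        # Partial pivoting: bring the largest pivot into place
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= factor * m[col][k]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

a = [[3.0, 2.0, 1.0], [2.0, 3.0, 1.0], [1.0, 2.0, 3.0]]
b = [39.0, 34.0, 26.0]
print(solve_linear(a, b))   # solution: x = 9.25, y = 4.25, z = 2.75
```

The ancient method performs the same column-by-column eliminations on the array of coefficients, which is what makes it a natural gateway to matrices and determinants in the classroom.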
Abstract:
Currently, there are different definitions of fuzzy implication accepted in the literature. From the theoretical point of view, this lack of consensus shows that there is disagreement about the real meaning of "logical implication" in the Boolean and fuzzy contexts. From the practical point of view, it creates doubt about which "implication operators" software engineers should consider when implementing a Fuzzy Rule-Based System (FRBS). A poor choice of these operators may result in FRBSs that are less accurate and less appropriate to their application domains. One way around this situation is to know the fuzzy logical connectives better, which requires knowing which properties such connectives may satisfy. Therefore, in order to contribute to the meaning of fuzzy implication and to the implementation of more appropriate FRBSs, several Boolean laws have been generalized and studied as equations or inequations in fuzzy logics. Such generalizations are called Boolean-like laws, and they do not generally hold in every fuzzy semantics. In this setting, this dissertation investigates the sufficient and necessary conditions under which three Boolean-like laws, namely y ≤ I(x, y), I(x, I(y, x)) = 1, and I(x, I(y, z)) = I(I(x, y), I(x, z)), remain valid in the fuzzy context, considering six classes of fuzzy implications as well as implications generated by automorphisms. Moreover, still aiming at the implementation of more appropriate FRBSs, we propose an extension of them.
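The three Boolean-like laws above can be checked numerically for one concrete fuzzy implication. The sketch below uses the Lukasiewicz implication I(x, y) = min(1, 1 − x + y), chosen here only for illustration (the dissertation covers six classes of implications); for this operator the first two laws hold identically, while the distributivity law fails.

```python
import numpy as np

# Lukasiewicz implication
I = lambda x, y: np.minimum(1.0, 1.0 - x + y)

g = np.linspace(0.0, 1.0, 11)          # grid over [0, 1]^3
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
tol = 1e-9                              # tolerance for floating-point rounding

law1 = bool(np.all(Y <= I(X, Y) + tol))                       # y <= I(x, y)
law2 = bool(np.all(np.abs(I(X, I(Y, X)) - 1.0) < tol))        # I(x, I(y, x)) = 1
law3 = bool(np.all(np.abs(I(X, I(Y, Z))
                          - I(I(X, Y), I(X, Z))) < tol))      # distributivity

print(law1, law2, law3)   # True True False: law 3 fails, e.g. at (0.5, 0.5, 0)
```

Such grid checks only refute a law (by exhibiting a counterexample); establishing that a law holds for a whole class of implications requires the analytic conditions the dissertation derives.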