963 results for Nonlinear differential equations - Numerical solutions


Relevance:

20.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure that models interaction effects between pairs of edges. Linear and quadratic costs are added to form the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is called the Adjacent Only Quadratic Minimum Spanning Tree (AQMST). AQMST and QMST are NP-hard problems that model several problems in the design of transport and distribution networks, and AQMST generally arises as the more suitable model for real problems. Although the literature adds linear and quadratic costs together, in real applications they may be conflicting, in which case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this Thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search (PLS), Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two, called NSTA. The proposed algorithms are compared to each other through performance analysis on computational experiments with instances adapted from the QMST literature. For the exact algorithms, the analysis considers, in particular, the execution time.
For the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated; quality indicators are used to assess this, and appropriate statistical tools are used to measure the performance of the exact and heuristic algorithms. Considering the set of instances adopted and the criteria of execution time and approximation-set quality, the experiments showed that the Tabu Search with ejection chains obtained the best results and the Transgenetic Algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a much higher computational cost than the other (meta)heuristics, placing third. The NSTA and NSGA-II algorithms took the last positions.
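The quadratic cost structure described above is simple to state in code. The sketch below is a minimal illustration, not the Thesis's implementation: the edge identifiers and all cost values are invented, chosen only to show how linear and pairwise interaction costs add up to a spanning tree's total cost.

```python
from itertools import combinations

def spanning_tree_cost(tree_edges, c, q):
    """Total QMST cost of a spanning tree: linear edge costs plus
    pairwise interaction costs between the tree's edges.

    tree_edges: edge identifiers forming the tree
    c: dict edge -> linear cost
    q: dict (edge, edge) -> quadratic interaction cost
       (in AQMST, q is nonzero only for pairs of adjacent edges)
    """
    linear = sum(c[e] for e in tree_edges)
    quadratic = sum(q.get((e1, e2), q.get((e2, e1), 0))
                    for e1, e2 in combinations(tree_edges, 2))
    return linear + quadratic

# Toy instance on a triangle graph: the tree {a, b} pays c[a] + c[b] + q[(a, b)].
c = {"a": 1, "b": 2, "c": 3}
q = {("a", "b"): 5, ("a", "c"): 0, ("b", "c"): 1}
cost = spanning_tree_cost(["a", "b"], c, q)  # 1 + 2 + 5 = 8
```

Minimizing this objective over all spanning trees is what makes the problem NP-hard: the interaction term couples edge choices, so greedy MST algorithms no longer apply.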

Relevance:

20.00%

Publisher:

Abstract:

Interval arithmetic, also known as Moore arithmetic, does not have the same properties as the real numbers, and for this reason it faces a problem of an operational nature when we want to solve interval equations as extensions of real equations via the usual equality and interval arithmetic: it possesses no additive inverse, and the distributivity of multiplication over addition does not hold for every triple of intervals. The lack of these properties prevents the use of equational logic, both for solving an interval equation, for representing a real equation, and for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and approximation on intervals, introduced by Acióly [6] in 1991, an interval equation can represent a real equation satisfactorily, since the terms of the interval equation carry information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later, of local equality for intervals [8], [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras in (Hennessy [31], 1988) and (Santiago [7], 1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≈ t′ is deduced from E in the proposed system SDedLoc(E), the interpretations of t and t′ are locally equal in every local Σ-algebra A that satisfies the set of fixed local equations E, whenever t and t′ have meaning in A. This ensures a kind of soundness between the local equational logic and the local algebras.
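The two operational defects of Moore arithmetic cited above are easy to exhibit concretely. The sketch below implements the standard interval operations (intervals as `(lo, hi)` tuples) and shows that X − X contains zero without being [0, 0], and that only subdistributivity holds; the specific intervals are arbitrary examples chosen to make the failures visible.

```python
def iadd(x, y):
    """Interval addition."""
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    """Interval subtraction: x - y = x + (-y)."""
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    """Interval multiplication: min/max over the four endpoint products."""
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

# No additive inverse: X - X contains 0 but is not the degenerate interval [0, 0].
X = (1.0, 2.0)
assert isub(X, X) == (-1.0, 1.0)

# Subdistributivity: X*(Y+Z) is contained in X*Y + X*Z, but equality can fail.
X, Y, Z = (-1.0, 1.0), (1.0, 1.0), (-1.0, -1.0)
lhs = imul(X, iadd(Y, Z))            # X*(Y+Z) = (0.0, 0.0)
rhs = iadd(imul(X, Y), imul(X, Z))   # X*Y + X*Z = (-2.0, 2.0)
```

Because `lhs != rhs`, an equational rewriting step that distributes multiplication over addition is unsound for intervals, which is exactly the obstacle the local-equality approach works around.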

Relevance:

20.00%

Publisher:

Abstract:

This study sprang from the hypothesis that spatial variations in the morbidity rate for dengue fever within the municipality of Natal are related to intra-city socioeconomic and environmental variations. The objective of the project was to classify the different suburbs of Natal according to their living conditions and establish whether there was any correlation between this classification and the incidence rate for dengue fever, with the aim of enabling public health planners to better control this disease. Data on population density, access to safe drinking water, rubbish collection, sewage disposal facilities, income level, education and the incidence of dengue fever during the years 2001 and 2003 were drawn from the Brazilian Demographic Census 2000 and from the Reportable Disease Notification System (SINAN). The study is presented here in the form of two papers, corresponding to the types of analysis performed: a classification of the urban districts into quartiles according to the living conditions which exist there, in the first article; and the incidence of dengue fever in each of these quartiles, in the second. By applying factor analysis to the chosen socioeconomic and environmental indicators for the year 2000, a compound index of living conditions (ICV) was obtained. On the basis of this index, it was possible to classify the urban districts into quartiles. On undertaking this grouping (paper 1), a heterogeneous distribution of living conditions was found across the city. As to the incidence rate for dengue fever (paper 2), it was discovered that the quartile identified as having the best living conditions presented incidence rates of 15.62 and 15.24 per 1000 inhabitants in the years 2001 and 2003 respectively, whereas the quartile representing the worst living conditions showed incidence rates of 25.10 and 10.32 for the comparable periods.
The results suggest that dengue fever occurs in all social classes, and that its incidence is not related in any evident way to the chosen measure of living conditions.

Relevance:

20.00%

Publisher:

Abstract:

In this dissertation, after a brief review of Einstein's General Relativity Theory and its application to the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models, we present and discuss the alternative theories of gravity dubbed f(R) gravity. These theories arise when one substitutes, in the Einstein-Hilbert action, the Ricci curvature R by some well-behaved nonlinear function f(R). They provide an alternative way to explain the current cosmic acceleration without invoking either a dark energy component or the existence of extra spatial dimensions. In dealing with f(R) gravity, two different variational approaches may be followed, namely the metric and the Palatini formalisms, which lead to very different equations of motion. We briefly describe the metric formalism and then concentrate on the Palatini variational approach to the gravity action. We make a systematic and detailed derivation of the field equations for Palatini f(R) gravity, which generalize Einstein's equations of General Relativity, and also obtain the generalized Friedmann equations, which can be used for cosmological tests. As an example, using recent compilations of type Ia supernova observations, we show how the f(R) = R − β/Rⁿ class of gravity theories explains the recently observed acceleration of the universe by placing reasonable constraints on the free parameters β and n. We also examine the question as to whether Palatini f(R) gravity theories permit space-times in which causality, a fundamental issue in any physical theory [22], is violated. As is well known, in General Relativity there are solutions to the field equations that have causal anomalies in the form of closed time-like curves, the renowned Gödel model being the best-known example of such a solution.
Here we show that every perfect-fluid Gödel-type solution of Palatini f(R) gravity with density ρ and pressure p satisfying the weak energy condition ρ + p ≥ 0 is necessarily isometric to the Gödel geometry, demonstrating, therefore, that these theories present causal anomalies in the form of closed time-like curves. This result extends a theorem on Gödel-type models to the framework of Palatini f(R) gravity theory. We derive an expression for a critical radius r_c (beyond which causality is violated) for an arbitrary Palatini f(R) theory; the expression makes apparent that the violation of causality depends on the form of f(R) and on the matter content. We concretely examine the Gödel-type perfect-fluid solutions in the f(R) = R − β/Rⁿ class of Palatini gravity theories and show that, for positive matter density and for β and n in the range permitted by the observations, these theories do not admit the Gödel geometry as a perfect-fluid solution of their field equations. In this sense, f(R) gravity theory remedies the causal pathology in the form of closed time-like curves which is allowed in General Relativity. We also examine the violation of causality of Gödel type by considering a single scalar field as the matter content. For this source, we show that Palatini f(R) gravity gives rise to a unique Gödel-type solution with no violation of causality. Finally, we show that by combining a perfect fluid and a scalar field as sources of Gödel-type geometries, we obtain both solutions with closed time-like curves and solutions with no violation of causality.
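As a small numerical aside on the f(R) = R − β/Rⁿ family that the abstract studies (β here stands for the abstract's free coupling parameter; the numeric values of β and n below are arbitrary, not the observationally constrained ones), the sketch checks the analytic derivative f′(R) = 1 + nβ/R^(n+1) against a finite difference, and that β = 0 recovers General Relativity's f(R) = R.

```python
def f(R, beta, n):
    """The f(R) = R - beta / R**n family of gravity Lagrangians."""
    return R - beta / R**n

def f_prime(R, beta, n, h=1e-6):
    """Numerical df/dR via central difference."""
    return (f(R + h, beta, n) - f(R - h, beta, n)) / (2 * h)

# beta = 0 reduces the theory to General Relativity: f(R) = R.
assert f(2.0, 0.0, 1) == 2.0

# Analytically f'(R) = 1 + n*beta / R**(n+1); check at R=2, beta=1, n=1.
analytic = 1 + 1 * 1.0 / 2.0**2   # = 1.25
assert abs(f_prime(2.0, 1.0, 1) - analytic) < 1e-5
```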

Relevance:

20.00%

Publisher:

Abstract:

The objective of this dissertation is the development of a general formalism to analyze the thermodynamical properties of a photon gas in the context of nonlinear electrodynamics (NLED). To this end, the general form of the Lagrangian that describes this kind of theory is obtained through a systematic analysis of the properties of Maxwell's electromagnetism (EM). From this Lagrangian, within classical field theory, we derive the general dispersion relation that photons must obey in terms of a background field and the NLED properties. It is important to note that, to achieve this result, an approximation is made that allows the total electromagnetic field to be separated into a strong background electromagnetic field and a perturbation. Once the dispersion relation is in hand, the usual Bose-Einstein statistical procedure is followed, through which the thermodynamical properties, namely the energy density and pressure relations, are obtained. An important result of this work is the fact that the equation of state remains identical to the one obtained for EM. Two examples are then worked out, in which the thermodynamic properties are explicitly derived for two NLED theories: Born-Infeld's and a quadratic approximation. The first is chosen because of its prominence in the literature and the second because it is a first-order approximation of a large class of NLED theories; ultimately, both are chosen for their simplicity. Finally, the results are compared to EM and interpreted, suggesting possible tests to verify the internal consistency of NLED and motivating further development of the formalism's quantum case.
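For the ordinary EM case, the Bose-Einstein step mentioned above reduces to the standard Planck integral. A minimal numerical sketch (normalization constants omitted, so u is the energy density only up to constant factors) confirms the integral's closed-form value π⁴/15 and the photon-gas equation of state p = u/3 that the abstract reports as unchanged under NLED.

```python
import math

def planck_integral(n_steps=200000, x_max=50.0):
    """Riemann sum for the Planck integral  ∫_0^∞ x^3 / (e^x - 1) dx,
    whose exact value is pi^4 / 15 ≈ 6.4939."""
    dx = x_max / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        x = i * dx
        total += x**3 / math.expm1(x) * dx  # expm1 avoids loss of precision
    return total

u = planck_integral()   # energy density up to constant factors
p = u / 3               # photon-gas equation of state p = u/3
assert abs(u - math.pi**4 / 15) < 1e-2
```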

Relevance:

20.00%

Publisher:

Abstract:

In this work we present mathematical and computational modeling of electrokinetic phenomena in electrically charged porous media. We consider a porous medium composed of three different scales (nanoscopic, microscopic and macroscopic). On the microscopic scale the domain is composed of a porous matrix and a solid phase; the pores are filled with an aqueous phase consisting of fully diluted ionic solutes, and the solid matrix consists of electrically charged particles. Initially we present the mathematical model that governs the electrical double layer, in order to quantify the electric potential, electric charge density, ion adsorption and chemical adsorption at the nanoscopic scale. We then derive the microscopic model, where the adsorption of ions due to the electric double layer, the protonation/deprotonation reactions, and the zeta potential obtained in the nanoscopic modeling appear at the microscopic scale through interface conditions in the Stokes problem and the Nernst-Planck equations, which respectively govern the movement of the aqueous solution and the transport of ions. We develop the upscaling of the nano/microscopic problem using the homogenization technique for periodic structures, deducing the macroscopic model together with its respective cell problems for the effective parameters of the macroscopic equations. Considering a clayey porous medium consisting of kaolinite clay plates distributed in parallel, we rewrite the macroscopic model in a one-dimensional version. Finally, using a sequential algorithm, we discretize the macroscopic model via the finite element method, together with the iterative Picard method for the nonlinear terms. Numerical simulations in the transient regime with variable pH in the one-dimensional case are obtained, aiming at computational modeling of the electroremediation process of contaminated clay soils.
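The Picard treatment of the nonlinear terms mentioned above is a fixed-point iteration: freeze the nonlinearity at the previous iterate, solve, repeat until the update stalls. A minimal scalar illustration (the equation x = cos x is an arbitrary stand-in for the nonlinear macroscopic system, chosen because its iteration map is a contraction):

```python
import math

def picard(g, x0, tol=1e-10, max_iter=1000):
    """Fixed-point (Picard) iteration: repeat x <- g(x) until the
    update is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# Solve the nonlinear equation x = cos(x); the fixed point is ~0.739085.
root = picard(math.cos, 1.0)
assert abs(root - math.cos(root)) < 1e-9
```

In the FEM setting the same loop applies with g replaced by "assemble and solve the linear system with coefficients evaluated at the previous iterate".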

Relevance:

20.00%

Publisher:

Abstract:

In general, an inverse problem consists in finding a value of an element x in a suitable vector space, given a vector y that measures it in some sense. When we discretize the problem, it usually boils down to solving an equation system f(x) = y, where f : U ⊆ R^m → R^n represents the step function on some domain U of the appropriate R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns that we try to know by observing their effects under certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. The more specific focus consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make in this dissertation consists mainly in the discussion of the numerical simulations we executed, as exposed in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying anything definitive about the subject: partly because it is based on numerical experiments with no new mathematical results associated with them, and partly because those experiments were made with a single operator. On the other hand, we obtained some observations on the simulations performed which seemed interesting to us, considering the literature of the area.
In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods like GCV and the L-curve, and also about the tendency, observed with the L-curve method, of the optimal parameters to group themselves in a small gap, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
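The Tikhonov setup described above, with a difference operator as regularizer, can be sketched on a toy problem. The forward operator A, the data y and the λ values below are invented stand-ins for the Radon-transform setting; the point is only the structure: minimize ||Ax − y||² + λ²||Lx||² by solving the normal equations (AᵀA + λ²LᵀL)x = Aᵀy.

```python
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]  # augmented copy
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= factor * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(A, y, L, lam):
    """Solve min ||Ax - y||^2 + lam^2 ||Lx||^2 via the normal equations
    (A^T A + lam^2 L^T L) x = A^T y."""
    m, n = len(A), len(A[0])
    def mat_t_mat(P, Q):  # P^T Q
        return [[sum(P[k][i] * Q[k][j] for k in range(len(P)))
                 for j in range(len(Q[0]))] for i in range(len(P[0]))]
    AtA, LtL = mat_t_mat(A, A), mat_t_mat(L, L)
    M = [[AtA[i][j] + lam**2 * LtL[i][j] for j in range(n)] for i in range(n)]
    Aty = [sum(A[k][i] * y[k] for k in range(m)) for i in range(n)]
    return solve_linear(M, Aty)

# Identity forward operator, first-difference regularizer.
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
L = [[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]]
y = [0.0, 1.0, 0.0]
x0 = tikhonov(A, y, L, 0.0)          # lam = 0 reproduces the data exactly
x_smooth = tikhonov(A, y, L, 10.0)   # heavy regularization flattens the spike
```

The parameter-choice methods the dissertation compares (GCV, the L-curve, and a third method) are different rules for picking λ on this trade-off between data fit and smoothness.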

Relevance:

20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to compare estimates of variance components obtained through Gaussian and Robust linear mixed models, via the Gibbs sampler, on simulated data. Fifty data files were simulated, each with 1,000 animals distributed over five generations, with two levels of fixed effect and three distinct phenotypic values for a hypothetical trait, with different levels of contamination. Except for the data without contamination, where the models were equivalent, the Robust model presented better estimates of the residual variance. Heritability estimates were similar in all models, but regression analyses showed that the breeding values predicted with the Robust model were closer to the true breeding values. These results suggest that the contaminated normal linear model offers a flexible alternative for robust estimation in animal breeding.
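The Gibbs sampler used above alternates draws from the full conditional of each unknown. A deliberately simplified illustration (a single Gaussian sample with a flat prior on the mean and an improper 1/σ² prior on the variance, not the mixed model of the study; the priors and data are assumptions made only for this sketch):

```python
import random
import statistics

def gibbs_normal(data, n_iter=2000, seed=42):
    """Minimal Gibbs sampler for the mean mu and variance s2 of Gaussian data.
    Full conditionals under these priors:
      mu | s2   ~ Normal(mean(data), s2 / n)
      s2 | mu   ~ Inverse-Gamma(n/2, SSE/2)   (drawn as b / Gamma(a, 1))."""
    rng = random.Random(seed)
    n = len(data)
    xbar = sum(data) / n
    mu, s2 = xbar, statistics.pvariance(data)  # start at the sample statistics
    mus, s2s = [], []
    for _ in range(n_iter):
        mu = rng.gauss(xbar, (s2 / n) ** 0.5)
        sse = sum((x - mu) ** 2 for x in data)
        s2 = (sse / 2) / rng.gammavariate(n / 2, 1.0)
        mus.append(mu)
        s2s.append(s2)
    return mus, s2s

gen = random.Random(1)
data = [gen.gauss(5.0, 2.0) for _ in range(200)]  # synthetic "phenotypes"
mus, s2s = gibbs_normal(data)
```

The posterior means of the chains recover the sample mean and variance; the study's models do the analogous alternation over fixed effects, breeding values and variance components.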

Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION: Hemiparesis is the most frequent sequela after stroke (cerebrovascular accident, CVA), impairing the speed of execution of automatic movements, reducing the individual's autonomy and generating disability. OBJECTIVES: To analyze the effect of spasticity on linear gait parameters (LGP) in hemiparetic individuals. METHODS: Two groups were studied: 20 individuals with stroke (G1) and 20 healthy, right-handed individuals without neurological sequelae (G2), with mean ages of 54.2 and 52.6 years respectively. LGP were assessed with the Nagazaki protocol, muscle tone with the modified Ashworth scale, and range of motion by goniometry. Parameters in the two groups were compared with Student's t-test and Spearman correlation, at a 5% significance level. RESULTS: Mean distance was 14.52 m and 32.16 m, and time was 23.75 s and 19.02 s, in G1 and G2 respectively (p < 0.0001). In the comparison between groups, mean step length and mean speed were statistically significant (p < 0.05), while cadence showed no significance (p = 0.1936). When the LGP were compared with the degree of spasticity of the gastrocnemius and soleus muscles, they showed a negative association with distance, step length and speed, and a positive association with time (p < 0.05). CONCLUSION: The greater the degree of spasticity of the gastrocnemius and soleus muscles, the lower the linear gait parameters of individuals with hemiparesis after stroke.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: There are no studies in the literature validating predictive equations for maximal heart rate (HRmax) in children and adolescents. OBJECTIVE: To analyze the validity of the predictive equations HRmax = 220 − age and HRmax = 208 − (0.7 × age) in boys aged 10 to 16 years. METHODS: A progressive maximal exercise test was performed by 69 apparently healthy and active boys aged 10 to 16 years. The initial test speed was 9 km/h, with increments of 1 km/h every three minutes. The test was maintained until voluntary exhaustion, with HRmax taken as the highest heart rate reached during the test. The measured HRmax was compared with the values predicted by the equations 220 − age and 208 − (0.7 × age) using repeated-measures ANOVA. RESULTS: Mean HRmax values (bpm) were: 200.2 ± 8.0 (measured), 207.4 ± 1.5 (220 − age) and 199.2 ± 1.1 (208 − (0.7 × age)). The HRmax predicted by the equation 220 − age was significantly higher (p < 0.001) than both the measured HRmax and the HRmax predicted by the equation 208 − (0.7 × age). The correlation between measured HRmax and age was not statistically significant (r = 0.096; p > 0.05). CONCLUSION: The equation 220 − age overestimated the measured HRmax and was not valid for this population. The equation 208 − (0.7 × age) proved valid, yielding results very close to the measured HRmax. Future studies with larger samples may confirm whether HRmax is independent of age in this population, in which case a constant value of 200 bpm would be more appropriate for HRmax.
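The two predictive equations compared above are one-liners; a minimal sketch showing how their predictions diverge over the 10-16 age range studied:

```python
def hrmax_220(age):
    """Classic prediction HRmax = 220 - age (overestimated HRmax in this sample)."""
    return 220 - age

def hrmax_208(age):
    """Prediction HRmax = 208 - 0.7 * age (close to the measured mean here)."""
    return 208 - 0.7 * age

# The formulas differ most at the youngest ages of the studied range.
assert hrmax_220(10) == 210
assert hrmax_208(10) == 201.0
assert hrmax_220(16) == 204
assert abs(hrmax_208(16) - 196.8) < 1e-9
```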

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to verify the effect of load selection and of the model used on the determination of critical power (CP) on an arm ergometer. Eight male volunteers, who practiced regular physical activity and were apparently healthy, took part in the study. The subjects performed four constant-load tests, maintained until voluntary exhaustion, on a UBE 2462-Cybex arm ergometer. The loads were individually selected to induce fatigue between 1 and 15 minutes. For each subject, CP was determined through two linear models: power-1/time and work-time. In each model, all powers (condition 1), the three highest (condition 2) and the three lowest (condition 3) were used. The CP values found with the power-1/time and work-time models for condition 3 (177.5 ± 29.5 and 173.9 ± 33.3, respectively) were significantly lower than those of condition 2 (190.5 ± 23.2 and 183.4 ± 22.3, respectively), with no differences between these and those of condition 1 (184.2 ± 25.4 and 176.4 ± 28.8, respectively). The CP values determined with the power-1/time model for conditions 1 and 2 were significantly higher than those determined with the work-time model, with no difference for condition 3. It can be concluded that the selected loads and the model used affect the CP determined on the arm ergometer, and may affect time to exhaustion during submaximal exercise performed at loads relative to this index.
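The two linear CP models compared above are both ordinary least-squares fits: in the power-1/time model, P = CP + W′·(1/t), CP is the intercept; in the work-time model, W = CP·t + W′, CP is the slope. A sketch with invented exhaustion trials (not the study's data):

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept a, slope b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical trials: (time to exhaustion t in s, constant power P in W).
trials = [(120, 250), (300, 210), (600, 190), (900, 183)]

# Model 1 (power-1/time): P = CP + W' * (1/t)  ->  intercept is CP.
cp1, w_prime1 = linfit([1 / t for t, _ in trials], [p for _, p in trials])

# Model 2 (work-time): W = P*t = W' + CP * t  ->  slope is CP.
w_prime2, cp2 = linfit([t for t, _ in trials], [p * t for t, p in trials])
```

With real (noisy) trial data the two models give systematically different CP estimates, which is exactly the effect the study quantifies.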

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

The 13C NMR chemical shifts of the α, β, γ and δ carbons of 17 sets of aliphatic halides (F, Cl, Br and I), including mono-, bi- and tricyclic compounds, can be reproduced by a linear equation with two constants and two variables of the type δ(R-X) = A·δ(R-X1) + B·δ(R-X2), where A and B are constants obtained by multilinear regression from 13C chemical shifts; δ(R-X) is the 13C chemical shift of the compound with halogen X (R-X), and δ(R-X1) and δ(R-X2) are the chemical shifts of two other halides. For aliphatic bromides (R-X), the best correlation was obtained with the data for fluorides (R-X1) and iodides (R-X2), with R² of 0.9989 and mean absolute deviation (MAD) of 0.39 ppm. For chlorides (R-X), the best correlation was with bromides (R-X1) and iodides (R-X2), with R² of 0.9960 and MAD of 0.76 ppm. For fluorides (R-X), the best correlation was with bromides (R-X1) and iodides (R-X2), with R² of 0.9977 and MAD of 1.10 ppm; and for iodides (R-X), it was with fluorides (R-X1) and bromides (R-X2), with R² of 0.9972 and MAD of 0.60 ppm.
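The two-constant equation above is an intercept-free least-squares fit. The sketch below solves the 2×2 normal equations for A and B and recovers known coefficients from synthetic shifts; the shift values (and the A = 0.4, B = 0.6 used to build them) are invented for the illustration, not the paper's data.

```python
def fit_two_term(d_x1, d_x2, d_x):
    """Least-squares fit of d_x ≈ A*d_x1 + B*d_x2 (no intercept):
    solve the 2x2 normal equations directly."""
    s11 = sum(a * a for a in d_x1)
    s12 = sum(a * b for a, b in zip(d_x1, d_x2))
    s22 = sum(b * b for b in d_x2)
    sy1 = sum(a * y for a, y in zip(d_x1, d_x))
    sy2 = sum(b * y for b, y in zip(d_x2, d_x))
    det = s11 * s22 - s12 * s12
    A = (sy1 * s22 - sy2 * s12) / det
    B = (s11 * sy2 - s12 * sy1) / det
    return A, B

# Synthetic check: "bromide" shifts built from A=0.4, B=0.6 are recovered exactly.
d_f = [10.0, 25.0, 40.0, 55.0]   # hypothetical fluoride shifts (ppm)
d_i = [20.0, 30.0, 35.0, 60.0]   # hypothetical iodide shifts (ppm)
d_br = [0.4 * f + 0.6 * i for f, i in zip(d_f, d_i)]
A, B = fit_two_term(d_f, d_i, d_br)
```

On real shift data the fit is not exact, and the paper's R² and mean absolute deviation quantify how close it comes.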