1000 results for Equação hipsométrica (hypsometric equation)
Abstract:
This work studies two important problems arising from the operations of the petroleum and natural gas industries. The first problem, the pipe dimensioning problem on constrained gas distribution networks, consists in finding the least-cost combination of diameters, chosen from a discrete set of commercially available ones, for the pipes of a given gas network, such that minimum pressure requirements at each demand node and upstream pipe conditions are respected. The second problem, the piston pump unit routing problem, comes from the need to define routes for a piston pump unit that visits a number of non-emergent wells in on-shore fields, i.e., wells that do not have enough pressure to make the oil emerge to the surface. The periodic version of this problem takes the wells' re-filling equation into account to provide more accurate long-term planning. Besides the mathematical formulation of both problems, an exact algorithm and a tabu search were developed for the solution of the first problem, and a theoretical bound and a ProtoGene transgenetic algorithm were developed for the solution of the second. The main concepts of the metaheuristics are presented along with the details of their application to the cited problems. The results obtained for both applications are promising when compared to theoretical bounds and alternative solutions, both with respect to solution quality and to running time.
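For readers unfamiliar with this class of problems, a minimal sketch of how a discrete diameter-selection problem of this kind is commonly formulated is given below; the notation (sets, symbols and constraints) is illustrative and is not taken from the thesis itself.

```latex
% Illustrative formulation of a least-cost pipe dimensioning problem.
% P: set of pipes, D: discrete set of available diameters, N: demand nodes,
% c_{pd}: cost of assigning diameter d to pipe p, x_{pd} in {0,1}: choice variable.
\begin{align*}
\min_{x} \quad & \sum_{p \in P} \sum_{d \in D} c_{pd}\, x_{pd} \\
\text{s.t.} \quad & \sum_{d \in D} x_{pd} = 1 \quad \forall p \in P
    \quad \text{(exactly one diameter per pipe)} \\
& \pi_n(x) \ge \pi_n^{\min} \quad \forall n \in N
    \quad \text{(minimum pressure at each demand node)} \\
& x_{pd} \in \{0,1\},
\end{align*}
% where \pi_n(x) is the node pressure implied by the (nonlinear) flow equations
% for a given diameter assignment, and upstream pipe conditions (e.g. non-increasing
% diameters downstream) enter as additional constraints.
```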
Abstract:
The degradation of natural resources is perhaps the main problem of the Brazilian semi-arid region, and this degradation results mainly from soil losses caused by the erosive process. Seeking a better understanding of this problem, environmental modelling has been employed, with the goal of identifying and proposing solutions for soil degradation. In this context, this work applies the Universal Soil Loss Equation (USLE/EUPS) model, developed in the United States during the 1950s, combined with geoprocessing tools, remote sensing data and Geographic Information Systems (GIS). The study area is the Riacho Passagem micro-watershed, located in the western region of the State of Rio Grande do Norte; the micro-watershed has an area of 221.7 km² and lies within the semi-arid zone of the Northeast region of Brazil. The methodology consists of assembling the USLE variables in the GIS environment using satellite images, bibliographic surveys and field work. The RAMPA model was used to determine slope lengths, and statistical adjustments were made to adapt the USLE to the conditions of the study area, improving the work and the results generated by the model. At the end of the process, a pseudo-language routine was developed in the Linguagem Espacial para Geoprocessamento Algébrico (LEGAL, Spatial Language for Algebraic Geoprocessing), available in the SPRING software version 5.1.2, to support the processing of the information contained in the database on which the USLE is based. The results show that it is first necessary to delimit precisely the dry and rainy seasons, information that is fundamental for the USLE, since the work seeks to identify soil loss by water erosion. The RAMPA model proved satisfactory, with high potential for determining slope lengths from radar images. Regarding the behavior of slope lengths in the micro-watershed, there was only a small variation, with the longest slopes in the eastern portion, near the outlet. After applying the model, the maximum soil loss was 88 t/ha·year, with hotspots located on NEOSSOLOS LITÓLICOS (Litholic Neosols), and the minimum was 0.01 t/ha·year, located in the domain of the LATOSSOLOS (Latosols) and NEOSSOLOS FLÚVICOS (Fluvic Neosols). Erosion reduces the soil profile, mainly in the NEOSSOLOS LITÓLICOS, altering the water balance and consequently increasing soil temperature, which may trigger desertification. The results and methodology of this work can be applied in the pursuit of sustainable development in the Brazilian semi-arid region, helping to understand the relationship between land use and the carrying capacity of the natural environment.
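For context, the standard multiplicative form of the USLE referred to above is recalled here; the factor symbols follow the usual USLE literature and are not quoted from this work.

```latex
% Universal Soil Loss Equation (USLE/EUPS), standard multiplicative form:
A = R \cdot K \cdot L \cdot S \cdot C \cdot P
% A: average annual soil loss (t/ha.year)   R: rainfall erosivity factor
% K: soil erodibility factor                L: slope-length factor
% S: slope-steepness factor                 C: cover-management factor
% P: support-practice factor
```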
Abstract:
OBJECTIVE: To test the effects of a low-fat diet, compared with a babassu-fat diet, on the nutritional status of young rats with obstructive cholestasis. METHODS: Forty rats, divided into four groups of 10 animals, were submitted from P21 (21st postnatal day) to P49 to two of the following treatments: ligation and resection of the common bile duct or sham operation, and a low-fat diet (corn oil providing 4.5% of total calories) or a babassu-fat diet (babassu fat providing 32.7% and corn oil 1.7% of total calories). Weight gain was measured every 4 days from P25 to P49. The Verhulst growth function was fitted to the weight-gain values, and growth velocity and acceleration at the same time points were estimated from the same equation. The following were also measured: amount of feed ingested and total energy intake from P21 to P49, energy utilization from P25 to P49, and fat absorption and nitrogen balance (NB) from P42 to P49. Two-way ANOVA and the S.N.K. method for pairwise comparisons were used to study the effects of cholestasis, diet and their interaction on the variables (p<0.05). RESULTS: Rats with cholestasis fed the low-fat diet showed higher growth velocity at P45, higher growth acceleration at P41 and P45, higher energy utilization, a higher percentage of absorbed fat and a higher NB than rats with cholestasis fed the babassu-fat diet. CONCLUSION: The low-fat diet attenuates the growth restriction caused by cholestasis and provides better utilization of the diet and greater incorporation of ingested protein than the babassu-fat diet.
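As a point of reference, the Verhulst (logistic) growth function and the velocity and acceleration obtained from it by differentiation are shown below; the parameter names are generic and not those of the paper.

```latex
% Verhulst (logistic) growth curve fitted to cumulative weight gain W(t):
W(t) = \frac{A}{1 + e^{-k\,(t - t_0)}}
% Growth velocity and acceleration follow by differentiation:
\frac{dW}{dt} = k\,W\!\left(1 - \frac{W}{A}\right), \qquad
\frac{d^2W}{dt^2} = k^2\,W\!\left(1 - \frac{W}{A}\right)\!\left(1 - \frac{2W}{A}\right)
% A: asymptotic weight gain, k: growth rate, t_0: time of the inflection point.
```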
Abstract:
In this work we study, for two different growth directions, multilayers of nanometric magnetic metallic films grown according to Fibonacci sequences, in such a way that the thickness of the non-magnetic spacer may vary from one pair of films to another. We apply a phenomenological theory that uses the magnetic energy to describe the behavior of the system. After finding numerically the global minimum of the total energy, we use the equilibrium angles to obtain magnetization and magnetoresistance curves. Next, we solve the equation of motion of the multilayers to find the dispersion relation of the system. The results show that, when the spacer thickness is chosen so that the biquadratic coupling is strong compared to the bilinear one, unusual behaviors of both magnetization and magnetoresistance are observed: for example, a dependence on the parity of the Fibonacci generation used to construct the system, a low magnetoresistance step at low external magnetic fields, and regions of high sensitivity to small variations of the applied field. These behaviors are not present in quasiperiodic magnetic multilayers with constant spacer thickness.
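A minimal sketch of the kind of phenomenological magnetic energy typically minimized in such studies is given below; sign conventions and anisotropy terms vary between works, so this is not necessarily the exact functional used in the thesis.

```latex
% Typical phenomenological energy (per unit area) for a stack of ferromagnetic
% films i = 1..N with in-plane magnetization angles \theta_i and non-magnetic spacers:
E = \sum_{i=1}^{N-1}\Big[-J_1^{(i)}\cos(\theta_i-\theta_{i+1})
      - J_2^{(i)}\cos^2(\theta_i-\theta_{i+1})\Big]
    \;-\; \sum_{i=1}^{N} H\,M_i\,t_i\cos(\theta_i-\theta_H)
    \;+\; E_{\text{anis}}(\{\theta_i\})
% J_1, J_2: bilinear and biquadratic interlayer couplings (spacer dependent),
% H: applied field at angle \theta_H, M_i, t_i: magnetization and thickness of film i,
% E_anis: magnetocrystalline anisotropy contribution.
```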
Abstract:
Currently there is great interest in the scientific community, across several areas of knowledge, in large-scale systems with a high degree of complexity; examples include the Internet, protein interaction networks and networks of collaboration among film actors, among others. To better understand the behavior of interconnected systems, several models in the area of complex networks have been proposed. Barabási and Albert proposed a model in which connections between the constituents of the system are created dynamically and favor older sites, reproducing a characteristic behavior of some real systems: a scale-invariant connectivity distribution. However, this model neglects two factors, among others, observed in real systems: homophily and metric distance. Given the importance of these two ingredients for the global behavior of networks, in this dissertation we propose and study a dynamic model of preferential attachment based on three essential factors that compete for links: (i) connectivity (more connected sites are privileged in the choice of links), (ii) homophily (connections between similar sites are more attractive), and (iii) metric distance (links are favored by the proximity of the sites). Within this proposal, we analyze how the connectivity distribution and the dynamic evolution of the network are affected by the metric, through a parameter A that controls the importance of distance in the preferential attachment, and by homophily, through an intrinsic characteristic of each site. We find that as distance becomes more important in the preferential attachment, connections become more local and the connectivity distribution becomes characterized by a typical scale. In parallel, for different values of A we fit the connectivity distribution curves to the equation P(k) = P0 e^(-k/κ).
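A schematic way to write such an attachment rule, combining the three competing ingredients, is shown below; the precise functional forms and parameter names used in the dissertation may differ, so this should be read as an illustration only.

```latex
% Illustrative preferential-attachment rule for a new site i choosing an existing site j:
\Pi(i \to j) \;\propto\; \frac{k_j \; f(\eta_i,\eta_j)}{r_{ij}^{\,\alpha}}
% k_j: connectivity (degree) of site j,
% f(\eta_i,\eta_j): homophily term, largest when the intrinsic characteristics
%                   \eta_i and \eta_j are similar,
% r_{ij}: distance between the sites,
% \alpha: parameter weighting the importance of distance (the "A" of the abstract).
```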
Abstract:
The objective of this dissertation is the development of a general formalism to analyze the thermodynamic properties of a photon gas in the context of nonlinear electrodynamics (NLED). To this end, we obtain, through a systematic analysis of the properties of Maxwell's electromagnetism (EM), the general dependence of the Lagrangian that describes this kind of theory. From this Lagrangian, and within classical field theory, we derive the general dispersion relation that photons must obey in terms of a background field and the NLED properties. It is important to note that, in order to achieve this result, an approximation was made that allows the separation of the total electromagnetic field into a strong background field and a perturbation. Once the dispersion relation is in hand, the usual Bose-Einstein statistical procedure is followed, through which the thermodynamic properties, energy density and pressure relations are obtained. An important result of this work is the fact that the equation of state remains identical to the one obtained under EM. Two examples are then worked out in which the thermodynamic properties are explicitly derived in the context of two NLED theories: Born-Infeld and a quadratic approximation. The first was chosen because of its prominence in the literature, the second because it is a first-order approximation of a large class of NLED theories; ultimately, both were chosen for their simplicity. Finally, the results are compared to EM and interpreted, suggesting possible tests to verify the internal consistency of NLED and motivating further development of the quantum case of the formalism.
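For reference, the standard Maxwell photon-gas relations against which the NLED results are compared are the blackbody expressions below; the NLED-specific corrections derived in the dissertation are not reproduced here.

```latex
% Standard (Maxwell) photon-gas thermodynamics from Bose-Einstein statistics:
u = \frac{\pi^2}{15}\,\frac{(k_B T)^4}{(\hbar c)^3}, \qquad
p = \frac{u}{3}, \qquad \mu = 0
% u: energy density, p: pressure; the equation of state p = u/3 is the relation
% reported in the abstract to survive unchanged in the NLED case.
```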
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Abstract:
We use a tight-binding formulation to investigate the transmissivity and the current-voltage (I-V) characteristics of sequences of double-strand DNA molecules. In order to reveal the relevance of the underlying correlations in the nucleotide distribution, we compare the results for a genomic DNA sequence with those of artificial sequences (the long-range correlated Fibonacci and Rudin-Shapiro ones) and a random sequence, which is a kind of prototype of a short-range correlated system. The random sequence is built here with the same first-neighbor pair correlations as the human DNA sequence. We find that the long-range character of the correlations is important for the transmissivity spectra, although the I-V curves seem to be mostly influenced by the short-range correlations. We also analyze in this work the electronic and thermal properties along an α-helix sequence obtained from an α3 peptide with the one-dimensional sequence (Leu-Glu-Thr-Leu-Ala-Lys-Ala)3. An ab initio quantum chemical calculation procedure is used to obtain the highest occupied molecular orbitals (HOMO) as well as their charge-transfer integrals, when the α-helix sequence forms two different variants, with (the so-called 5Q variant) and without (the 7Q variant) fibrous assemblies that can be observed by transmission electron microscopy. The difference between the two structures is that the 5Q (7Q) structure has an Ala → Gln substitution at the 5th (7th) position, respectively. We estimate theoretically the density of states as well as the electronic transmission spectra of the peptides using a tight-binding Hamiltonian model together with Dyson's equation. Besides, we solve the time-dependent Schrödinger equation to compute the spread of an initially localized wave packet. We also compute the localization length in the finite α-helix segment and the quantum specific heat. Keeping in mind that fibrous proteins can be associated with diseases, the important differences observed in the present electronic transport studies encourage us to suggest this method as a molecular diagnostic tool.
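A generic form of the one-dimensional tight-binding model used in this kind of transport study, together with the wave-packet spread extracted from the time-dependent Schrödinger equation, is sketched below; the site energies and hoppings specific to the DNA and peptide sequences of the thesis are not reproduced.

```latex
% Generic single-channel tight-binding Hamiltonian for a chain of sites n:
H = \sum_n \varepsilon_n\,|n\rangle\langle n|
  + \sum_n t_{n,n+1}\big(|n\rangle\langle n+1| + |n+1\rangle\langle n|\big)
% \varepsilon_n: on-site energy of unit n, t_{n,n+1}: hopping (charge-transfer) integral.
% The spread of an initially localized wave packet \psi_n(t), obtained from
% i\hbar\,\partial_t\psi = H\psi, is usually quantified by
\sigma(t) = \Big[\sum_n n^2\,|\psi_n(t)|^2 - \Big(\sum_n n\,|\psi_n(t)|^2\Big)^{\!2}\Big]^{1/2}
```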
Abstract:
In this work we have developed a way to grow Fe/MgO(100) single crystals by DC magnetron sputtering. We investigated growth in the temperature range between 100 °C and 300 °C. Structural and magneto-crystalline properties were studied by different experimental techniques. The thickness and surface roughness of the films were investigated by atomic force microscopy, while the magneto-crystalline properties were investigated by magneto-optical Kerr effect and ferromagnetic resonance. Our results show that as the deposition temperature increases, the magneto-crystalline anisotropy of the films also increases, following the Avrami equation. The best temperature for growing a film is 300 °C. As the main result, we built a base for magnetoresistance devices and, as an application, we present measurements of the coupling in a Fe/Cr/Fe trilayer. In a second study we investigated the temperature dependence of the first three interlayer spacings of the Ag(100) surface using low-energy electron diffraction. A linear expansion model of the crystal surface was used, and the Debye temperatures of the first two layers and the thermal expansion coefficient were determined. A relaxation of 1% was found for the Ag(100) surface, and these results are consistent with the (110) and (111) faces of silver.
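For reference, the standard Johnson-Mehl-Avrami-Kolmogorov (Avrami) form is shown below; how the abstract maps the transformed fraction onto the anisotropy as a function of deposition temperature is specific to the thesis and not reproduced here.

```latex
% Standard Avrami (JMAK) expression for the transformed fraction X:
X = 1 - \exp\!\left(-K\,t^{\,n}\right)
% K: rate constant, n: Avrami exponent, t: independent variable
% (time in the classical formulation).
```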
Abstract:
Considering a quantum gas, the foundations of standard thermostatistics are investigated in the context of the non-Gaussian statistical mechanics introduced by Tsallis and Kaniadakis. The new formalism is based on the following generalizations: i) the Maxwell-Boltzmann-Gibbs entropy and ii) the deduction of the H-theorem. Based on this investigation, we calculate a new entropy using a generalization of combinatorial analysis based on two different counting methods. The basic ingredients used in the H-theorem were a generalized quantum entropy and a generalization of the collisional term of the Boltzmann equation. The power-law distributions are parameterized by the parameters q and κ, which measure the degree of non-Gaussianity of the quantum gas. In the limit q → 1 (κ → 0) the standard results are recovered.
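For orientation, the textbook definitions of the two generalized frameworks named in the abstract are recalled below; they are standard forms, not quoted from the thesis.

```latex
% Tsallis entropy and its q -> 1 limit:
S_q = k_B\,\frac{1-\sum_i p_i^{\,q}}{q-1}
  \;\xrightarrow{\;q\to 1\;}\; -k_B \sum_i p_i \ln p_i
% Kaniadakis kappa-exponential, which replaces the Boltzmann factor and
% recovers the ordinary exponential as kappa -> 0:
\exp_\kappa(x) = \left(\sqrt{1+\kappa^2 x^2} + \kappa x\right)^{1/\kappa}
  \;\xrightarrow{\;\kappa\to 0\;}\; e^{x}
```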
Abstract:
In general, an inverse problem consists in finding the value of an element x in a suitable vector space, given a vector y that measures it in some sense. When we discretize the problem, it usually boils down to solving a system of equations f(x) = y, where f : U ⊂ R^m → R^n represents the forward map on a domain U of the appropriate R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns that we try to estimate by observing their effects through certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for a poorly conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. Our more specific focus consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by additive i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make consists mainly in the discussion of the numerical simulations we perform, as exposed in Chapter 2. We understand that the value of this dissertation lies much more in the questions it raises than in saying something definitive about the subject, partly because it is based on numerical experiments with no new mathematical results associated to them, and partly because the experiments were made with a single operator. On the other hand, we made some observations in the simulations that seemed interesting to us in view of the literature of the area. In particular, we highlight the observations, summarized in the conclusion of this work, about the different vocations of methods such as GCV and the L-curve, and also about the tendency, observed for the L-curve method, of the optimal parameters to group themselves in a small interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
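A generic numpy sketch of Tikhonov regularization with a difference-operator regularizer, together with the residual/seminorm pairs from which an L-curve plot is built, is given below; the forward operator here is a toy smoothing matrix, not the Radon transform used in the dissertation, and none of this is the author's code.

```python
# Illustrative sketch (not the thesis code): Tikhonov regularization with a
# first-difference regularizer, plus the quantities used to draw an L-curve.
import numpy as np

def tikhonov(A, y, L, lam):
    """Solve min ||A x - y||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ y)

rng = np.random.default_rng(0)
n = 64
# Toy ill-conditioned forward operator (a Gaussian smoothing matrix); the thesis
# uses the Radon transform instead, which is not reproduced here.
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
x_true = np.zeros(n)
x_true[20:40] = 1.0                              # piecewise-constant "image"
y = A @ x_true + 0.01 * rng.standard_normal(n)   # additive i.i.d. Gaussian noise

# First-difference operator as the regularizer (a difference operator, as in the thesis).
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# L-curve data: residual norm vs. solution seminorm over a grid of lambdas.
for lam in np.logspace(-4, 1, 6):
    x = tikhonov(A, y, L, lam)
    print(f"lam={lam:.1e}  ||Ax-y||={np.linalg.norm(A @ x - y):.3e}  ||Lx||={np.linalg.norm(L @ x):.3e}")
```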
Abstract:
Among the theorems taught in basic education, some can be proved in the classroom and others cannot, because of the degree of difficulty of their formal proofs. A classic example is the Fundamental Theorem of Algebra, which is usually not proved, since higher-level knowledge of mathematics is required. In this work we justify the validity of this theorem intuitively using the GeoGebra software and, based on [2], we present a clear formal proof of this theorem addressed to school teachers and undergraduate students in mathematics.
Abstract:
In this work we obtained nickel ferrite by the combustion synthesis method, which involves synthesis in an oven at temperatures of 750 °C, 950 °C and 125 °C. The precursors used were nickel nitrate and ferric nitrate as oxidizers and urea as the reducing agent (fuel). After the mixture was obtained, the product was deagglomerated and passed through a 270 mesh sieve. To assess the structure, morphology, particle size, and the magnetic and electrical properties of the nanoparticles obtained, the samples were sintered and characterized by X-ray diffraction (XRD), X-ray fluorescence spectroscopy (FRX), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), vibrating sample magnetometry (MAV) and electrical permittivity measurements. The results indicated inverse spinel nickel ferrite as the majority phase, with hematite and nickel oxide as secondary phases. From the width at half maximum of the diffraction peaks, the average crystallite size was calculated using the Scherrer equation. From the peaks of all the reflections, it appears that the samples are crystalline, with the formation of nanoparticles. Morphologically, the sintered nickel nanoferrites showed pellet formation in the three systems, with particle sizes below 100 nm, which favored the formation of soft pellets. The average grain size is on the micrometric scale. FRX and EDS showed qualitatively the presence of the elements iron, nickel and oxygen, and the quantitative data confirm the presence of the secondary phases. The magnetic properties, saturation magnetization and coercive field, are consistent with nickel ferrite, and the hysteresis curve has the aspect of a soft magnetic material. The dielectric constant values are below 10, with a low loss tangent.
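The Scherrer equation mentioned above has the standard form below; the shape factor and X-ray wavelength actually used in the thesis are not specified in the abstract.

```latex
% Scherrer equation for the mean crystallite size D:
D = \frac{K\,\lambda}{\beta\,\cos\theta}
% K: shape factor (commonly ~0.9), \lambda: X-ray wavelength,
% \beta: full width at half maximum of the diffraction peak (in radians),
% \theta: Bragg angle of the peak.
```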
Abstract:
This study presents the results of an analysis, by remote sensing, of areas susceptible to degradation in a semi-arid region, a matter of concern that affects the whole population; the catalyst of this process is the deforestation of the savanna and improper soil-use practices. The objective of this research is to use biophysical parameters from MODIS/Terra and TM/Landsat-5 images to determine areas susceptible to degradation in the semi-arid region of Paraíba. The study area is located in the central interior of Paraíba, in the sub-basin of the Taperoá River, with average annual rainfall below 400 mm and average annual temperature of 28 °C. To draw up the vegetation map, TM/Landsat-5 images were used, specifically the 5R4G3B color composition, commonly used for mapping land use. This map was produced by unsupervised maximum-likelihood classification. The legend corresponds to the following targets: sparse and dense savanna vegetation, riparian vegetation and exposed soil. The MODIS biophysical parameters used were emissivity, albedo and the normalized difference vegetation index (NDVI). The GIS software used comprised the MODIS Reprojection Tool and the Georeferenced Information Processing System (SPRING), in which the database of information from the MODIS and TM sensors was set up and processed, and the ArcGIS software for producing more customizable maps. Initially, we evaluated the behavior of the vegetation emissivity by adapting the Bastiaanssen equation based on NDVI to spatialize the emissivity and observe its changes during the year 2006. The albedo was used to assess its percentage increase between December 2003 and December 2004. The Landsat TM images were used for December 2005, according to the availability of images and within periods of low emissivity. These applications were implemented as programs in LEGAL (Spatial Language for Algebraic Geoprocessing), a programming routine of SPRING that allows various types of algebra to be performed on spatial data and maps. For the detection of areas susceptible to environmental degradation, we took into account the behavior of the emissivity of the savanna, which showed a seasonal pattern coinciding with the rainy season, reaching maximum emissivity from April to July and low emissivity in the remaining months. With the albedo images of December 2003 and 2004, the percentage increase was computed, which allowed the generation of two distinct classes: areas with a percentage variation of 1 to 11.6% and areas with a percentage change in albedo of less than 1%. It was then possible to generate the map of susceptibility to environmental degradation by intersecting the exposed-soil class with the percentage variation of the albedo, resulting in classes of susceptibility to environmental degradation.
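One widely used empirical relation for estimating surface emissivity from NDVI, adopted for example in Bastiaanssen's SEBAL scheme, is shown below; the exact adaptation applied in this study is not detailed in the abstract, so the coefficients should be read as illustrative.

```latex
% Empirical emissivity-NDVI relation (van de Griend and Owe form used in SEBAL):
\varepsilon_0 = 1.009 + 0.047\,\ln(\mathrm{NDVI}), \qquad \mathrm{NDVI} > 0
% \varepsilon_0: surface emissivity; outside the valid NDVI range (e.g. water,
% bare rock), fixed emissivity values are normally assigned instead.
```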
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)