906 results for numerical computation
Abstract:
Predicting and mapping productivity areas allows crop producers to improve their planning of agricultural activities. The primary aims of this work were the identification and mapping of specific management areas allowing coffee bean quality to be predicted from soil attributes and their relationships to relief. The study area was located in the southeast of Minas Gerais state, Brazil. A grid containing a total of 145 uniformly spaced nodes 50 m apart was established over an area of 31.7 ha, from which samples were collected at depths of 0.00-0.20 m in order to determine physical and chemical attributes of the soil. These data were analysed in conjunction with plant attributes including production, the proportion of beans retained by different sieves, and drink quality. The results of principal component analysis (PCA) in combination with geostatistical data showed the attributes clay content and available iron to be the best choices for identifying four crop production environments. Environment A, which exhibited high clay and available iron contents and low pH and base saturation, was the one providing the highest yield (30.4 l ha-1; 61 sacks ha-1) and the best coffee beverage quality. Based on the results, we believe that multivariate analysis, geostatistics and the soil-relief relationships contained in the digital elevation model (DEM) can be effectively used in combination for the hybrid mapping of areas of varying suitability for coffee production. © 2012 Springer Science+Business Media New York.
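A minimal sketch of the PCA step this abstract describes, using scikit-learn on synthetic stand-in data; the sample values and the attribute list in the comment are hypothetical, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for the 145 grid samples x soil-attribute table (hypothetical data;
# columns could be e.g. clay, sand, pH, available Fe, K, P, organic matter, CEC)
rng = np.random.default_rng(42)
soil = rng.normal(size=(145, 8))

# Standardize so attributes measured on different scales contribute equally
X = StandardScaler().fit_transform(soil)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)   # per-sample scores, usable for mapping

# Loadings show which attributes (clay content and available iron in the study)
# dominate each principal component
print(pca.explained_variance_ratio_)
print(np.round(pca.components_, 2))
```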
Abstract:
In this paper we present a finite difference MAC-type approach for solving three-dimensional viscoelastic incompressible free surface flows governed by the eXtended Pom-Pom (XPP) model, considering a wide range of parameters. The numerical formulation presented in this work is a three-dimensional extension of our implicit technique [Journal of Non-Newtonian Fluid Mechanics 166 (2011) 165-179] for solving two-dimensional viscoelastic free surface flows. To enhance the stability of the numerical method, we employ a combination of the projection method with an implicit technique for treating the pressure on the free surfaces. The differential constitutive equation of the fluid is solved using a second-order Runge-Kutta scheme. The numerical technique is validated by performing a mesh refinement study on a pipe flow, and the numerical results presented include the simulation of two complex viscoelastic free surface flows: the extrudate-swell problem and the jet buckling phenomenon. © 2013 Elsevier B.V.
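For readers unfamiliar with the time integrator named above, here is a hedged sketch of a generic second-order (midpoint) Runge-Kutta step of the kind applied to the constitutive equation; rhs() is a placeholder, not the actual XPP right-hand side.

```python
import numpy as np

def rk2_step(rhs, tau, t, dt):
    """Advance the stress/conformation state tau by one midpoint RK2 step."""
    k1 = rhs(t, tau)
    k2 = rhs(t + 0.5 * dt, tau + 0.5 * dt * k1)
    return tau + dt * k2

# Toy linear-relaxation right-hand side standing in for the XPP model
rhs = lambda t, tau: -tau / 0.1   # relaxation time 0.1 (illustrative)
tau = np.array([1.0])
for n in range(100):
    tau = rk2_step(rhs, tau, n * 1e-3, 1e-3)
print(tau)
```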
Abstract:
Insect pest phylogeography might be shaped both by biogeographic events and by human influence. Here, we conducted an approximate Bayesian computation (ABC) analysis to investigate the phylogeography of the New World screwworm fly, Cochliomyia hominivorax, with the aim of understanding its population history and its order and time of divergence. Our ABC analysis supports that populations spread from North to South in the Americas in at least two different moments. The first split occurred between the North/Central American and South American populations at the end of the Last Glacial Maximum (15,300-19,000 YBP). The second split occurred between the North and South Amazonian populations at the transition between the Pleistocene and Holocene epochs (9,100-11,000 YBP). The species also experienced population expansion. Phylogenetic analysis likewise suggests this north-to-south colonization, and Maxent models suggest an increase in the number of suitable areas in South America from the past to the present. We found that the phylogeographic patterns observed in C. hominivorax cannot be explained by climatic oscillations alone and can be connected to host population histories. Interestingly, we found these patterns to be highly coincident with general patterns of ancient human movements in the Americas, suggesting that humans might have played a crucial role in shaping the distribution and population structure of this insect pest. This work presents the first hypothesis test regarding the processes that shaped the current phylogeographic structure of C. hominivorax and represents an alternative perspective on investigating the problem of insect pests. © 2013 Fresia et al.
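A toy illustration of the rejection-ABC loop underlying this kind of analysis; the simulator, summary statistic, prior, and tolerance below are assumptions for illustration, not the paper's demographic model.

```python
import numpy as np

rng = np.random.default_rng(0)
observed_summary = 2.0          # e.g. an observed genetic diversity statistic

def simulate(theta):
    """Toy simulator: draws data under a divergence-time parameter theta."""
    return rng.normal(theta, 1.0, size=100)

accepted = []
while len(accepted) < 1000:
    theta = rng.uniform(0.0, 10.0)            # draw from the prior
    s = simulate(theta).mean()                # summary statistic
    if abs(s - observed_summary) < 0.1:       # tolerance epsilon
        accepted.append(theta)

# The accepted draws approximate the posterior over the divergence time
print(np.mean(accepted), np.percentile(accepted, [2.5, 97.5]))
```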
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Graduate Program in Physics - IFT
Abstract:
We developed the numerical modelling of synthetic Marine Controlled Source Electromagnetic (MCSEM) data used in hydrocarbon exploration for simple three-dimensional models using parallel computing. The models consist of two stratified layers, the sea and the sediments hosting a thin three-dimensional reservoir, overlain by the half-space corresponding to the air. In this work we present a three-dimensional approach of the finite element technique applied to the MCSEM method, using the formulation of the primary and secondary decomposition of the coupled magnetic and electric potentials. In a post-processing step, the electromagnetic fields are computed from the scattered potentials via numerical differentiation. We exploit the parallelism of the 3D MCSEM data in a multi-transmitter survey, in which each transmitter position requires the same computational process on different data. For this, we used the Message Passing Interface (MPI) library and the client-server model, where the manager processor sends the input data to the client processors that compute the modelling. The input data consist of the finite element mesh parameters, the transmitters, and the geoelectric model of the reservoir, which has a prismatic geometry representing lenses of hydrocarbon reservoirs in deep water. We observed that when the horizontal width and length of these reservoirs have the same order of magnitude, the in-line responses are very similar and consequently the three-dimensional effect is not detected. In turn, when the difference between the width and the length of the reservoir is significant, the 3D effect is easily detected by in-line measurements along the largest horizontal dimension of the reservoir. For measurements along the smallest dimension this effect is not detectable, since in this case the 3D model approaches a two-dimensional one. The data parallelism is fast to implement and to run. The execution time of the multi-transmitter modelling in a parallel environment is equivalent to the processing time of the modelling for a single transmitter on a sequential machine, plus the latency of data transmission between the cluster nodes, which justifies the use of this methodology in the modelling and interpretation of MCSEM data. Due to the limited memory (2 Gbytes) on each processor of the cluster of the geophysics department of UFPA, only very simple models were run.
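A minimal sketch, using mpi4py, of the manager/client pattern the thesis describes, where each transmitter position triggers the same forward computation on different data; forward_model() is a placeholder for the 3D finite-element MCSEM solver, and a collective scatter/gather stands in for explicit send/receive calls for brevity.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def forward_model(tx):
    # Placeholder: solve the coupled-potential FE system for transmitter tx
    return {"tx": tx, "fields": None}

if rank == 0:
    # Manager builds the survey: hypothetical in-line transmitter positions
    transmitters = [(-1000.0 + 100.0 * i, 0.0) for i in range(40)]
    chunks = [transmitters[i::size] for i in range(size)]
else:
    chunks = None

# Each rank receives its share of transmitter positions and models them
my_chunk = comm.scatter(chunks, root=0)
my_results = [forward_model(tx) for tx in my_chunk]
all_results = comm.gather(my_results, root=0)

if rank == 0:
    n = sum(len(r) for r in all_results)
    print(f"collected modelling results for {n} transmitters")
```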
Abstract:
We present two automatic algorithms, based on the Wiener-Hopf least-squares method, for computing digital linear filters for the sine, cosine, and Hankel J0, J1 and J2 transforms. The first optimizes the parameters abscissa increment, initial abscissa, and shift factor used to compute the coefficients of digital linear filters assessed through cosine and sine transforms; the second optimizes the parameters abscissa increment and initial abscissa used to compute the coefficients of digital linear filters assessed through Hankel J0, J1 and J2 transforms. These algorithms led to the proposal of new digital linear filters of 19, 30 and 40 points for the cosine and sine transforms and of new optimized filters of 37, 27 and 19 points for the J0, J1 and J2 transforms, respectively. The performance of the new filters relative to those in the geophysical literature is evaluated using a geophysical model consisting of two half-spaces. As a source, we used an infinite current line between the half-spaces, giving rise to cosine and sine transforms. Better performance was verified in most simulations using the new 19-point cosine filter compared to the 19-point cosine filter from the literature, and equivalent performance was verified in simulations using the new 19-point sine filter compared to the 20-point sine filter from the literature. Additionally, a vertical magnetic dipole between the half-spaces was used as a source, giving rise to J0 and J1 transforms; better performance was verified in most simulations using the new 27-point J1 filter compared to the 47-point J1 filter from the literature, and equivalent performance was verified in most simulations using the new 37-point J0 filter compared to the 61-point J0 filter from the literature. A horizontal magnetic dipole between the half-spaces was also used as a source, and the new 37- and 27-point filters for the J0 and J1 transforms performed analogously, relative to the 61- and 47-point filters from the literature, to what was described above. Finally, equivalent performance was verified between the new 37-point J0 and 27-point J1 filters and the 61- and 47-point filters from the literature, respectively, when applied to vertical electrical sounding models (Wenner and Schlumberger). Most of our filters contain few coefficients compared to those commonly used in geophysics. This aspect is very important because transforms using digital linear filters are used massively in geophysical numerical problems.
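A hedged sketch of how a digital linear filter of this kind evaluates a Hankel J0 transform: the kernel is sampled at log-spaced abscissae controlled by the initial abscissa and increment, then combined with the filter weights. The 3-point weights below are dummies showing the call signature, not the proposed 19/27/37-point coefficients, and the sampling convention is one common choice, not necessarily the thesis's.

```python
import numpy as np

def hankel_j0_filter(kernel, r, initial, increment, weights):
    """Approximate F(r) = integral of kernel(k) * J0(k*r) dk via a linear filter."""
    n = len(weights)
    # log-spaced abscissae k_i = 10**(initial + i*increment) / r
    k = 10.0 ** (initial + increment * np.arange(n)) / r
    return np.dot(weights, kernel(k)) / r

# Test kernel with a known closed form, useful for assessing a real filter:
# integral of k*exp(-k)*J0(k*r) dk = (1 + r**2)**(-1.5)
kernel = lambda k: k * np.exp(-k)
# Dummy parameters purely to show usage; a real filter's weights come from
# the Wiener-Hopf least-squares optimization described above
print(hankel_j0_filter(kernel, r=2.0, initial=-3.0, increment=0.1,
                       weights=np.ones(3) / 3.0))
print((1.0 + 2.0**2) ** -1.5)   # target value for a well-designed filter
```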
Abstract:
In this paper, we propose a hybrid methodology based on graph coloring and a Genetic Algorithm (GA) to solve the Wavelength Assignment (WA) problem in optical networks impaired by physical-layer effects. Our proposal was developed for a static scenario where the physical topology and traffic matrix are known a priori. First, we used fixed shortest-path routing to attend demand requests over the physical topology and a graph-coloring algorithm to minimize the number of necessary wavelengths. Then, we applied the genetic algorithm to solve WA. The GA finds the wavelength activation order on the wavelength grid with the aim of reducing the Cross-Phase Modulation (XPM) effect; the variance due to XPM was used as the fitness function to evaluate the feasibility of the selected WA solution. Its performance was compared with the First-Fit algorithm in two different scenarios, showing a reduction in blocking probability of up to 37.14% when both XPM and residual dispersion effects were considered and of up to 71.42% when only the XPM effect was considered. Moreover, it was possible to reduce the number of wavelengths by 57.14%.
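A small illustration of the graph-coloring stage, assuming networkx: lightpaths that share a fibre link become adjacent vertices in a conflict graph, and a greedy coloring yields the wavelength count that the GA then works with. The tiny conflict graph is hypothetical.

```python
import networkx as nx

# Vertices = lightpaths; edges join lightpaths sharing a physical link
conflict = nx.Graph()
conflict.add_edges_from([("p1", "p2"), ("p2", "p3"), ("p1", "p3"), ("p3", "p4")])

# Greedy coloring heuristic: color index = assigned wavelength index
coloring = nx.greedy_color(conflict, strategy="largest_first")
n_wavelengths = len(set(coloring.values()))
print(coloring)        # lightpath -> wavelength index
print(n_wavelengths)   # wavelength count found by the heuristic
```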
Abstract:
Graduate Program in Mechanical Engineering - FEG
Abstract:
Bio-molecular computing, 'computations performed by bio-molecules', is already challenging traditional approaches to computation both theoretically and technologically. Often placed within the wider context of 'bio-inspired', 'natural', or even 'unconventional' computing, the study of natural and artificial molecular computations is adding to our understanding of biology, the physical sciences and computer science well beyond the framework of existing design and implementation paradigms. In this introduction, we outline the current scope of the field and assemble some basic arguments that bio-molecular computation is of central importance to computer science, the physical sciences and biology, using HOL (Higher Order Logic). HOL is used as the computational tool in our R&D work. DNA was analyzed as a chemical computing engine in our effort to develop novel formalisms for understanding molecular-scale bio-chemical computing behavior using HOL. In our view, this focus is one of the pioneering efforts in the promising domain of nano-bio scale chemical information processing dynamics.
Abstract:
This paper presents numerical modeling of a turbulent natural gas flow through a non-premixed industrial burner of a slab reheating furnace. The furnace is equipped with diffusion side swirl burners capable of utilizing natural gas or coke oven gas alternatively through the same nozzles. The study is focused on one of the burners of the preheating zone. Computational Fluid Dynamics simulation has been used to predict the burner orifice turbulent flow. Flow rate and pressure upstream of the burner were validated by experimental measurements. The outcomes of the numerical modeling are analyzed for the different turbulence models in terms of pressure drop, velocity profiles, and orifice discharge coefficient. The standard, RNG, and Realizable k-epsilon models and the Reynolds Stress Model (RSM) have been used. The main purpose of the numerical investigation is to determine the turbulence model that most consistently reproduces the experimental results of the flow through an industrial non-premixed burner orifice. The comparisons between simulations indicate that all the tested models satisfactorily represent the experimental conditions. However, the Realizable k-epsilon model seems to be the most appropriate turbulence model, since it provides results quite similar to those of the RSM and RNG k-epsilon models while requiring only slightly more computational power than the standard k-epsilon model. (C) 2014 Elsevier Ltd. All rights reserved.
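As a worked example of the orifice discharge coefficient used to compare the models, here is a short computation under Bernoulli assumptions, Cd = Q_actual / Q_ideal; all numbers are illustrative, not the burner's measured values.

```python
import math

rho = 0.68          # natural gas density [kg/m^3] (illustrative)
d = 0.05            # orifice diameter [m] (illustrative)
dp = 4000.0         # measured pressure drop across the orifice [Pa]
q_actual = 0.14     # measured volumetric flow rate [m^3/s]

area = math.pi * d**2 / 4.0
q_ideal = area * math.sqrt(2.0 * dp / rho)   # ideal (lossless) flow rate
cd = q_actual / q_ideal
print(f"discharge coefficient Cd = {cd:.3f}")
```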
Abstract:
A number of studies have demonstrated that simple elastic network models can reproduce experimental B-factors, providing insights into the structure-function properties of proteins. Here, we report a study on how to improve an elastic network model and explore its performance by predicting the experimental B-factors. Elastic network models are built on the experimental Cα coordinates, and they only take the pairs of Cα atoms within a given cutoff distance rc into account. These models describe the interactions by elastic springs with the same force constant. We have developed a method based on numerical simulations with a simple coarse-grained force field, to attribute weights to these spring constants. This method considers the time that two Cα atoms remain connected in the network during partial unfolding, establishing a means of measuring the strength of each link. We examined two different coarse-grained force fields and explored the computation of these weights by unfolding the native structures. Proteins 2014; 82:119-129. (c) 2013 Wiley Periodicals, Inc.
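A minimal sketch of the unweighted baseline: a Gaussian-network-style model that predicts B-factors from the diagonal of the pseudo-inverse of the Kirchhoff matrix. The paper's contribution, replacing uniform springs with weights from partial-unfolding simulations, would enter through the optional weights argument; coordinates here are random stand-ins for real Cα positions.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0, weights=None):
    """Relative B-factors from an elastic network on C-alpha coordinates."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contact = (dist < cutoff) & ~np.eye(n, dtype=bool)
    # Uniform springs by default; weights would be an (n, n) strength matrix
    k = contact.astype(float) if weights is None else contact * weights
    gamma = np.diag(k.sum(axis=1)) - k          # Kirchhoff matrix
    ginv = np.linalg.pinv(gamma)                # pseudo-inverse drops zero mode
    return np.diag(ginv)                        # proportional to B-factors

coords = np.random.default_rng(1).uniform(0, 20, size=(50, 3))  # toy C-alphas
print(gnm_bfactors(coords)[:5])
```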
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a 'blind injection challenge' similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data was analysed by GW detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs' angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10 M☉ + 10 M☉ (50 M☉ + 50 M☉) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ~15% for 50 M☉ + 50 M☉ BBH coalescences with almost maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
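A sketch of the optimal matched-filter signal-to-noise ratio that underlies sensitive-distance estimates of this kind, rho^2 = 4 * integral of |h(f)|^2 / Sn(f) df; the power-law waveform amplitude and flat noise PSD below are toy placeholders for the hybrid BBH waveforms and the early aLIGO/AdV sensitivity curves.

```python
import numpy as np

f = np.linspace(20.0, 1024.0, 4096)            # analysis band [Hz]
df = f[1] - f[0]
htilde = 1e-23 * (f / 100.0) ** (-7.0 / 6.0)   # toy inspiral-like |h(f)|
psd = np.full_like(f, 1e-46)                   # flat one-sided noise PSD (toy)

# rho^2 = 4 * integral of |h(f)|^2 / Sn(f) df, here on a uniform grid
snr = np.sqrt(4.0 * np.sum(np.abs(htilde) ** 2 / psd) * df)
print(f"optimal matched-filter SNR ~ {snr:.1f}")
```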