238 results for Interpolação
Abstract:
Knowing the annual climatic conditions is of great importance for appropriate planning in agriculture. However, climatic classification systems are not widely used in agricultural studies because of the wide range of scales at which they are applied. A series of 20 years of observations from 45 climatological stations throughout the state of Pernambuco was used. The probability density function of the incomplete gamma distribution was used to evaluate the occurrence of dry, regular, and rainy years. The monthly climatic water balance was estimated using the Thornthwaite and Mather (1955) method, and based on those findings, climatic classifications were performed for each site using the Thornthwaite (1948) and Thornthwaite and Mather (1955) systems. Kriging interpolation was used to spatialize the results. The resulting classifications were very sensitive to local relief, rainfall amounts, and regional temperatures, yielding a large number of climatic types. The climatic classification system of Thornthwaite and Mather (1955) allowed efficient classification of climates and a clearer summary of the information provided, demonstrating its capability to determine agroclimatic zones.
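The year-classification step can be illustrated with a minimal sketch, assuming tercile cut-offs on the fitted gamma distribution and synthetic annual totals (the regularized incomplete gamma function is this distribution's CDF); the thresholds and data are illustrative, not the thesis values:

```python
# A minimal sketch: classify years as dry/regular/rainy by their cumulative
# probability under a gamma distribution fitted to annual rainfall totals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_rain = rng.gamma(shape=4.0, scale=200.0, size=20)  # 20 synthetic years (mm)

# Fit a two-parameter gamma distribution (location fixed at zero).
shape, loc, scale = stats.gamma.fit(annual_rain, floc=0)

def classify_year(total_mm):
    """Label a year by its cumulative probability under the fitted gamma."""
    p = stats.gamma.cdf(total_mm, shape, loc=loc, scale=scale)
    if p < 1 / 3:
        return "dry"
    elif p < 2 / 3:
        return "regular"
    return "rainy"

for year, total in zip(range(1995, 2015), annual_rain):
    print(year, f"{total:7.1f} mm", classify_year(total))
```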
Abstract:
This work explores the suitability of the Lagrange interpolating polynomial as a tool to estimate and correct solar databases. Given the irradiance distribution over a day, a portion of it was removed and reconstructed with the Lagrange interpolating polynomial. The estimates generated by interpolation were then assessed with the MBE and RMSE statistical indicators. The application of Lagrange interpolation produced the following results: underestimation of 0.27% (MBE = -1.83 W/m²) and scattering of 0.51% (RMSE = 3.48 W/m²).
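A minimal sketch of this gap-filling-and-scoring procedure, assuming a synthetic clear-sky profile and an arbitrary midday gap; since Lagrange polynomials are ill-conditioned with many nodes, only a few samples bracketing the gap are used:

```python
# Remove a portion of a daily irradiance curve, reconstruct it with a Lagrange
# interpolating polynomial, and score the estimates with MBE and RMSE.
import numpy as np
from scipy.interpolate import lagrange

hours = np.linspace(6, 18, 25)                        # daytime samples
irradiance = 900 * np.sin(np.pi * (hours - 6) / 12)   # synthetic clear-sky profile

gap = (hours >= 11) & (hours <= 13)                   # portion removed from the day
known_h, known_i = hours[~gap], irradiance[~gap]

# Use only the six samples closest to the gap as interpolation nodes.
nodes = np.argsort(np.abs(known_h - 12))[:6]
poly = lagrange(known_h[nodes], known_i[nodes])

errors = poly(hours[gap]) - irradiance[gap]
mbe = errors.mean()                    # mean bias error (W/m²)
rmse = np.sqrt((errors ** 2).mean())   # root mean square error (W/m²)
print(f"MBE = {mbe:.2f} W/m², RMSE = {rmse:.2f} W/m²")
```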
Abstract:
In this work, the author seeks to develop a new method capable of incorporating the concepts of Reliability Theory and Ruin Probability into deep foundations, in order to better quantify the uncertainties intrinsic to all geotechnical projects, mainly because not all properties of the materials involved are known. Using the Décourt-Quaresma and David Cabral methodologies, resistance surfaces were developed from the Standard Penetration Tests performed in the study field, in conjunction with the loads defined in the executive design of the piles. The construction of resistance surfaces proved to be a very useful tool for decision making, whether in the design or the execution phase. The surfaces were developed by kriging (using the Surfer® 12 software), making it easier to visualize the geotechnical profile of the study field. Comparing the results, the conclusion was that a high safety factor does not necessarily mean greater safety. It is fundamental to consider the loads and resistances of the piles across the whole field, carefully choosing the design methodology responsible for defining the diameter and length of the piles.
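The conclusion that a high safety factor does not necessarily mean greater safety can be made concrete with a minimal reliability-theory sketch, assuming normally distributed resistance R and load S with illustrative moments (not the thesis data):

```python
# Reliability index and failure probability for the margin R - S, assuming
# normal distributions; two designs share the same global safety factor but
# differ in the scatter of the resistance estimate.
import numpy as np
from scipy import stats

def failure_probability(mu_R, cov_R, mu_S, cov_S):
    """Return (beta, Pf) for normally distributed resistance R and load S."""
    sd_R, sd_S = mu_R * cov_R, mu_S * cov_S
    beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)   # reliability index
    return beta, stats.norm.cdf(-beta)            # probability of failure

# Two piles with the same global safety factor FS = mu_R / mu_S = 2.0.
for cov_R in (0.15, 0.40):
    beta, pf = failure_probability(mu_R=2000.0, cov_R=cov_R,
                                   mu_S=1000.0, cov_S=0.10)
    print(f"COV_R = {cov_R:.2f}: beta = {beta:.2f}, Pf = {pf:.2e}")
```

With the same safety factor of 2.0, the design whose resistance estimate is more scattered is roughly two orders of magnitude more likely to fail, which is the point the thesis makes about safety factors versus reliability.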
Abstract:
Yield mapping represents the spatial variability of the features of a productive area and allows intervening in the next year's production, for example through site-specific input application. The trial aimed to verify the influence of sampling density and interpolator type on the precision of yield maps produced by manual grain sampling, a solution usually adopted when a combine with a yield monitor cannot be used. A yield map was developed using data obtained from a combine equipped with a yield monitor during corn harvesting. From this map, 84 sample grids were established, and 252 yield maps were created with three interpolators: inverse square distance, inverse distance, and ordinary kriging. They were then compared with the original map using the coefficient of relative deviation (CRD) and the kappa index. The loss of yield mapping information increased as the sampling density decreased, and also depended on the interpolation method used. A multiple regression model was fitted to the CRD as a function of the spatial variability index and the sampling density. This model is intended to help the farmer define the sampling density, making manual yield mapping possible during eventual problems with the yield monitor.
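A minimal sketch of the two inverse-distance interpolators compared above; the sample coordinates and yields are synthetic stand-ins for the manually sampled grain data:

```python
# Inverse distance weighting on a yield sample set; power=1 gives inverse
# distance and power=2 gives inverse square distance.
import numpy as np

def idw(xy_samples, values, xy_targets, power=2, eps=1e-12):
    """Weight each sample by 1/d**power and normalize the weights."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
samples = rng.uniform(0, 100, size=(84, 2))                            # sample grid (m)
yields = 8 + 2 * np.sin(samples[:, 0] / 15) + rng.normal(0, 0.3, 84)   # yield (t/ha)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

map_p1 = idw(samples, yields, grid, power=1)  # inverse distance
map_p2 = idw(samples, yields, grid, power=2)  # inverse square distance
print(map_p1.mean(), map_p2.mean())
```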
Abstract:
Field remote sensing data can provide detailed information on the variability of biophysical parameters linked to yield over large areas and have potential for monitoring these parameters throughout the crop development cycle. This work aimed to map the spatial variability of the normalized difference vegetation index (NDVI) and its components in two commercial cotton fields (Gossypium hirsutum L.) using a ground-level active optical sensor. Data were collected with a sensor mounted on a self-propelled agricultural sprayer. A GPS receiver was coupled to the sensor to obtain the coordinates of the sampling points. Readings were taken along swaths spaced 21.0 m apart, taking advantage of the vehicle's passes during agrochemical spraying, and the data were subjected to classical statistical and geostatistical analysis. Spatial distribution maps of the variables were produced by kriging interpolation. Greater spatial variability of NDVI and of the spectral reflectance of the vegetation in the near-infrared (NIR, 880 nm) and visible (590 nm) regions was observed in the field under greater physiological stress, caused by attack of the burrower bug [Scaptocoris castanea (Hem.: Cydnidae)], than in the healthy field.
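Two computations behind these maps, the NDVI from the two sensor bands and the empirical semivariogram used in the geostatistical analysis, can be sketched briefly; the coordinates and band readings below are synthetic assumptions:

```python
# NDVI from the 880 nm and 590 nm bands, plus an empirical semivariogram
# gamma(h) computed by averaging squared differences within distance classes.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 500, size=(300, 2))   # GPS sampling points (m)
nir = rng.uniform(0.3, 0.6, 300)          # reflectance at 880 nm
vis = rng.uniform(0.05, 0.15, 300)        # reflectance at 590 nm

ndvi = (nir - vis) / (nir + vis)

def empirical_semivariogram(xy, z, n_bins=12, max_lag=250.0):
    """Average 0.5*(z_i - z_j)**2 over pairs grouped by separation distance."""
    i, j = np.triu_indices(len(z), k=1)
    h = np.linalg.norm(xy[i] - xy[j], axis=1)
    sq = 0.5 * (z[i] - z[j]) ** 2
    bins = np.linspace(0, max_lag, n_bins + 1)
    idx = np.digitize(h, bins) - 1
    keep = (idx >= 0) & (idx < n_bins)
    gamma = np.bincount(idx[keep], weights=sq[keep], minlength=n_bins)
    counts = np.bincount(idx[keep], minlength=n_bins)
    return bins[:-1] + np.diff(bins) / 2, gamma / np.maximum(counts, 1)

lags, gamma = empirical_semivariogram(xy, ndvi)
print(np.round(gamma, 5))
```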
Abstract:
A new and complete method is proposed for the analysis of acetone in exhaled air, involving collection with preconcentration in water, chemical derivatization, and electrochemical determination assisted by a new signal-processing algorithm. In the recent literature, exhaled acetone has been evaluated as a biomarker for non-invasive monitoring of clinical conditions such as diabetes and heart failure, hence the relevance of the proposal. Among the amines that react with acetone to form electroactive imines, studied by polarography in the middle of the last century, glycine presented the best set of characteristics for defining the determination method by square-wave voltammetry without the need for oxygen removal (25 Hz, 20 mV amplitude, 5 mV increment, mercury drop electrode). The reaction medium, composed of glycine (2 mol·L⁻¹) in NaOH (1 mol·L⁻¹), also served as the electrolyte, and the imine reduction peak at -1.57 V vs. Ag|AgCl constituted the analytical signal. For signal processing, an innovative algorithm was developed and evaluated, based on baseline interpolation by Bézier curve fitting and on fitting a Gaussian to the peak. This combination allowed the recognition and quantification of relatively low, broad peaks over a baseline with pronounced curvature and noise, a situation in which conventional methods fail and spline curves proved less appropriate. The algorithm (available at http://github.com/batistagl/chemapps) was implemented using open-source matrix algebra software integrated directly with the potentiostat control software. To demonstrate how the equipment's native resources can be extended generally through integration with external programming in the Octave language (open source), three-dimensional chronocoulometry was implemented, with visualization of processed results in 3D perspective mesh projections from any angle. The electrochemical determination of acetone in the aqueous phase, assisted by the Bézier-curve-based algorithm, is fast and automatic, has a detection limit of 3.5·10⁻⁶ mol·L⁻¹ (0.2 mg·L⁻¹), and covers a linear range that meets the requirements of exhaled air analysis. Acetaldehyde, commonly present in exhaled air, especially after consumption of alcoholic beverages, gives rise to a voltammetric peak at -1.40 V, circumventing an interference that compromises several other methods published in the literature and opening the possibility of simultaneous determination. Results obtained with real samples agree with those obtained by a spectrophotometric method in routine use since its refinement in the author's master's dissertation. Relative to the dissertation, the geometry of the collection device was also optimized, so as to concentrate the acetone in a smaller volume of cold water and provide greater comfort to the patient. The complete method presented, encompassing the improved sampling device and the new, effective algorithm for automatic processing of voltammetric signals, is ready to be applied. Evolution toward a portable analyzer depends on improving the detection limit and on the ease of obtaining solid (printed) electrodes with a mercury film, since bismuth and boron-doped diamond electrodes, among others, showed no response.
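The signal-treatment idea, a Bézier baseline fitted on peak-free regions plus a Gaussian fitted to the residual peak, can be sketched as follows; curve shapes, windows, and starting values are illustrative assumptions, not the thesis implementation (which is available at http://github.com/batistagl/chemapps):

```python
# Cubic Bézier baseline under a voltammetric peak: y(t) = sum_i B_i(t) * y_i is
# linear in the four control ordinates y_i, so they can be fitted by least
# squares on peak-free windows; a Gaussian is then fitted to the residual.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
E = np.linspace(-1.9, -1.2, 200)                       # potential (V)
baseline_true = 2e-6 * np.exp(-3.0 * (E + 1.2))        # strongly curved background
peak_true = 4e-7 * np.exp(-((E + 1.57) / 0.05) ** 2)   # low, broad imine peak
i_meas = baseline_true + peak_true + rng.normal(0, 2e-8, E.size)

# Bernstein design matrix for the cubic Bézier, parameterized on [0, 1].
t = (E - E[0]) / (E[-1] - E[0])
B = np.stack([(1 - t) ** 3, 3 * (1 - t) ** 2 * t,
              3 * (1 - t) * t ** 2, t ** 3], axis=1)
mask = (E < -1.75) | (E > -1.35)                       # peak-free windows only
ctrl, *_ = np.linalg.lstsq(B[mask], i_meas[mask], rcond=None)
base = B @ ctrl                                        # interpolated baseline

def gauss(E, a, mu, sigma):
    return a * np.exp(-((E - mu) / sigma) ** 2)

popt, _ = curve_fit(gauss, E, i_meas - base, p0=[3e-7, -1.6, 0.05])
print(f"peak: {popt[0]:.2e} A at {popt[1]:.3f} V")
```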
Abstract:
Heart rate variability (HRV) analysis uses time series containing the intervals between successive heartbeats to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretations in the analysis of HRV signals. The classic approach to dealing with these artifacts is to use correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of some linear and nonlinear correction methods on HRV signals with induced artifacts, by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals from rats, measured by telemetry, were used to generate real, error-free heart rate variability series. Missing points (beats) were simulated in these series in different quantities, in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that, at low levels of missing points, the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of losses, only the NPI method yields HRV parameters with low error values and few significant differences compared with the values calculated for the same signals without missing points.
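Two of the correction methods compared above, linear interpolation (LI) and cubic spline interpolation (CI), can be sketched on a synthetic RR series with deleted beats; the series and the loss pattern are assumptions for illustration:

```python
# Recover a 10%-loss RR series with linear and cubic spline interpolation,
# then compare two time-domain HRV parameters against the intact series.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(3)
rr = 180 + 8 * np.sin(np.arange(600) / 20) + rng.normal(0, 4, 600)  # rat RR (ms)
t = np.arange(rr.size)

lost = rng.choice(rr.size, size=60, replace=False)   # 10% missing beats
keep = np.setdiff1d(t, lost)

rr_li = np.interp(t, keep, rr[keep])                 # linear interpolation (LI)
rr_ci = CubicSpline(keep, rr[keep])(t)               # cubic spline interpolation (CI)

def sdnn(x):   return x.std(ddof=1)                  # overall variability
def rmssd(x):  return np.sqrt(np.mean(np.diff(x) ** 2))  # beat-to-beat variability

for name, series in [("original", rr), ("LI", rr_li), ("CI", rr_ci)]:
    print(f"{name:8s} SDNN = {sdnn(series):5.2f} ms, RMSSD = {rmssd(series):5.2f} ms")
```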
Abstract:
The energy demand for operating Information and Communication Technology (ICT) systems has been growing, implying high operational costs and a consequent increase in carbon emissions. Both in datacenters and in telecom infrastructures, the networks account for a significant share of energy spending. Given that, there is increased demand for energy efficiency solutions, and several capabilities to save energy have been proposed. However, it is very difficult to orchestrate such energy efficiency capabilities, i.e., to coordinate or combine them in the same network, ensuring conflict-free operation and choosing the best one for a given scenario, so that a capability not suited to the current bandwidth utilization is not applied and does not lead to congestion or packet loss. Moreover, the literature offers no way to do this while taking business directives into account. In this regard, a method able to orchestrate different energy efficiency capabilities is proposed, considering the possible combinations and conflicts among them, as well as the best option for a given bandwidth utilization and network characteristics. In the proposed method, the business policies specified in a high-level interface are refined down to the network level in order to bring high-level directives into the operation, and a Utility Function is used to combine energy efficiency and performance requirements. A Decision Tree able to determine what to do in each scenario is deployed in a Software Defined Networking environment. The proposed method was validated with different experiments, testing the Utility Function, checking the extra savings when combining several capabilities, the decision tree interpolation, and dynamicity aspects. The orchestration proved valid for finding the best combination for a given scenario, achieving additional savings due to the combination, besides ensuring conflict-free operation.
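The utility-function step can be sketched minimally, assuming an illustrative capability table and weights (not the thesis's policy refinement or SDN deployment): each capability is scored by weighted energy savings and performance penalty, and capabilities unsafe at the current bandwidth utilization are ruled out:

```python
# Score energy-efficiency capabilities with a utility function and pick the
# best one for the current bandwidth utilization; the table and weights are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    savings: float          # fraction of energy saved
    max_utilization: float  # above this load, the capability risks congestion
    perf_penalty: float     # normalized performance cost

def utility(cap, utilization, w_energy=0.7, w_perf=0.3):
    """Higher is better; unsafe capabilities are excluded outright."""
    if utilization > cap.max_utilization:
        return float("-inf")  # would lead to congestion or packet loss
    return w_energy * cap.savings - w_perf * cap.perf_penalty

capabilities = [
    Capability("sleep-idle-links",  savings=0.30, max_utilization=0.40, perf_penalty=0.20),
    Capability("adaptive-link-rate", savings=0.15, max_utilization=0.70, perf_penalty=0.10),
    Capability("no-op",              savings=0.00, max_utilization=1.00, perf_penalty=0.00),
]

for load in (0.25, 0.55, 0.85):
    best = max(capabilities, key=lambda c: utility(c, load))
    print(f"utilization {load:.0%}: apply {best.name}")
```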
Abstract:
Hydraulic conductivity (K) is one of the parameters controlling the magnitude of groundwater velocity and, consequently, one of the most important parameters affecting groundwater flow and solute transport, making knowledge of the distribution of K of the utmost importance. This work aims to estimate hydraulic conductivity values in two distinct areas, one in the Guarani Aquifer System (SAG) and the other in the Bauru Aquifer System (SAB), using three geostatistical techniques: ordinary kriging, cokriging, and conditional simulation by turning bands. To enlarge the database of K values, a statistical treatment of the known data was applied. The mathematical interpolation method (ordinary kriging) and the stochastic method (conditional simulation by turning bands) are applied to estimate K values directly, whereas ordinary kriging combined with linear regression and cokriging allow specific capacity (Q/s) values to be incorporated as a secondary variable. Additionally, the cell declustering technique was applied to each geostatistical method to compare its ability to improve the methods' performance, which can be evaluated by cross-validation. The results of these geostatistical approaches indicate that conditional simulation by turning bands with declustering and ordinary kriging combined with linear regression without declustering are the most suitable for the SAG (rho = 0.55) and SAB (rho = 0.44) areas, respectively. The statistical treatment and the declustering technique used in this work proved to be useful auxiliary tools for the geostatistical methods.
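A minimal ordinary-kriging sketch in plain NumPy, assuming an exponential variogram and synthetic well data in log10(K); the thesis additionally uses cokriging and turning-bands simulation, which are not shown:

```python
# Ordinary kriging of log10(K) at one target location: solve the kriging
# system built from a variogram model, with a Lagrange multiplier enforcing
# that the weights sum to one.
import numpy as np

def variogram(h, sill=1.0, rng_=20.0, nugget=0.05):
    """Exponential variogram model gamma(h)."""
    return nugget + sill * (1 - np.exp(-3 * h / rng_))

def ordinary_kriging(xy, z, xy0):
    n = len(z)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)          # gamma(0) = 0 (exact interpolation)
    A[n, :], A[:, n], A[n, n] = 1.0, 1.0, 0.0  # unbiasedness constraint
    b = np.append(variogram(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    return w[:n] @ z                           # kriged estimate at xy0

rng = np.random.default_rng(5)
wells = rng.uniform(0, 100, size=(30, 2))                   # well locations (km)
logK = -4 + 0.02 * wells[:, 0] + rng.normal(0, 0.3, 30)     # log10 K (m/s)

print(ordinary_kriging(wells, logK, np.array([50.0, 50.0])))
```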
Abstract:
This work presents a new methodology for optimizing portfolios of financial assets. The proposed methodology, based on universal interpolators such as Artificial Neural Networks and Kriging, approximates the risk surface, and consequently the solution of the optimization problem associated with it, in a general way applicable to any risk measure available in the literature. In addition, the suggested methodology relaxes restrictive hypotheses inherent to existing methodologies, simplifying the optimization problem and allowing the errors in the approximation of the risk surface to be estimated. As an illustration, the proposed methodology is applied to the portfolio composition problem with Variance (as control), Value-at-Risk (VaR), and Conditional Value-at-Risk (CVaR) as objective functions. The results are compared with those obtained by the Markowitz and Rockafellar models, respectively.
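The surface-approximation idea can be sketched on a two-asset example with a kriging-type interpolator: estimate CVaR at a few portfolio weights, fit a Gaussian process to those expensive evaluations, and optimize over the cheap surrogate; the return data, kernel, and confidence level are illustrative assumptions:

```python
# Approximate the CVaR risk surface over the weight of the first asset with
# Gaussian process regression (a kriging-type interpolator), then locate the
# minimum on the surrogate; the GP also yields approximation error bars.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)
r = rng.multivariate_normal([0.0005, 0.0008],
                            [[1e-4, 3e-5], [3e-5, 4e-4]], size=5000)  # daily returns

def cvar(w, alpha=0.95):
    """Empirical CVaR: expected loss beyond the alpha-quantile of losses."""
    losses = -(w * r[:, 0] + (1 - w) * r[:, 1])
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

w_train = np.linspace(0, 1, 8)                 # a few expensive evaluations
y_train = np.array([cvar(w) for w in w_train])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(w_train[:, None], y_train)

w_grid = np.linspace(0, 1, 501)
y_hat, y_std = gp.predict(w_grid[:, None], return_std=True)  # surface + errors
print(f"min CVaR ~ {y_hat.min():.5f} at w = {w_grid[y_hat.argmin()]:.3f}")
```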
Abstract:
Computational intelligence methods have been expanding into industrial applications, motivated by their ability to solve engineering problems, and embedded systems follow the same idea of running computational intelligence tools on machines. There are several works in the areas of embedded systems and of intelligent systems; however, few papers have joined the two. The aim of this study was to implement adaptive fuzzy neural hardware with online training embedded on a Field Programmable Gate Array (FPGA). System adaptation can occur during the execution of a given application, aiming at online performance improvement. The proposed system architecture is modular, allowing different configurations of fuzzy neural network topologies with online training. The proposed system was applied to mathematical function interpolation, pattern classification, and self-compensation of industrial sensors, achieving satisfactory performance in all three tasks. The experimental results show the advantages and disadvantages of online training in hardware when performed in parallel and sequentially. The sequential training method saves FPGA area but increases the complexity of the architecture's control; the parallel training method achieves high performance and reduced processing time, with the pipeline technique used to increase the architecture's throughput. The development of the study was based on available tools for FPGA circuits.
Abstract:
This study aimed to evaluate the influence of the main meteorological mechanisms that form and inhibit precipitation, and of the interactions between their different scales of action, on the spatial and temporal variability of the annual precipitation cycle in Rio Grande do Norte, also considering local and regional circumstances, thus creating a scientific basis to support future actions in managing the state's water demand. The database comprises 45 years of monthly precipitation, from 1963 to 2007, provided by EMPARN. The methodology consisted initially of descriptive statistical analysis of the historical data to verify the stability of the series; geostatistical tools were then applied to map the variables, with kriging chosen as the interpolation method because it produced the best results and the smallest errors. Among the results, we highlight the state's annual rainfall cycle, which is influenced by meteorological mechanisms of different spatial and temporal scales. The main mechanisms modulating the cycle are the Intertropical Convergence Zone (ITCZ), acting from mid-February to mid-May throughout the state; easterly waves (OL), instability lines (LI), breeze systems, and orographic rainfall, acting mainly on the coastal strip between February and July; and upper-level cyclonic vortices (VCANs), Mesoscale Convective Complexes (CCMs), and orographic rain in any region of the state, mainly in spring and summer. Among larger-scale phenomena, El Niño and La Niña (ENSO) in the tropical Pacific basin stood out: La Niña episodes are usually associated with normal or rainy years, whereas prolonged drought periods are influenced by El Niño. In the Atlantic Ocean, the Dipole pattern also affects the intensity of the state's rainfall cycle. The rainfall cycle in Rio Grande do Norte is divided into two periods: one comprising the Oeste and Central mesoregions and the western portion of the Agreste Potiguar, west of the Chapada da Borborema, with rains from mid-February to mid-May; and a second, between February and July, with rains in the Leste and Agreste mesoregions, located windward of the Chapada da Borborema. Both are interspersed with dry periods without significant rainfall and with rainy-dry and dry-rainy transition periods in which isolated rainfall occurs. Approximately 82% of the state's rainfall stations, corresponding to 83.4% of the total area of Rio Grande do Norte, do not record annual volumes above 900 mm. Because the state's water supply is maintained by small reservoirs already in an advanced state of eutrophication, the rains, when they occur, wash and replace the water in the reservoirs, improving its quality and slowing the eutrophication process; when significant rains do not occur, or after long periods of shortage, the eutrophication and deterioration of the water stored in dams increase significantly. Knowledge of the behavior of the annual rainfall cycle thus gives insight into whether the following period tends to be rainy or prone to shortage, mainly by observing the trends of the larger-scale phenomena.