340 results for geostatistical
Abstract:
The Podzols of the world are divided into intra-zonal and zonal according to their location. Zonal Podzols are typical of the boreal and taiga zones and are associated with climatic conditions. Intra-zonal Podzols are not necessarily limited by climate and are typical of mineral-poor substrates. The intra-zonal Podzols of the Brazilian Amazon cover substantial surfaces of the upper Amazon basin. Their formation is attributed to perched groundwater associated with accumulations of organic matter and metals in reducing/acidic environments. Podzols can store large amounts of soil organic carbon in deep, thick spodic horizons (Bh), at depths ranging from 1.5 to 5 m. Previous research on the carbon stock of Amazon soils has not taken into account the deep carbon stock (below 1 m depth) of Podzols. Given this, the main goal of this research was to quantify and map the soil organic carbon stock in the Rio Negro basin, considering both the carbon stored in the first meter of soil and the carbon stored in deep horizons down to 3 m. The amount of soil organic carbon stored in soils of the Rio Negro basin was evaluated at different map scales, from local surveys to the scale of the whole basin. Remote sensing images of high spatial and spectral resolution were needed to map the soil types of the study areas and to estimate the soil carbon stock at local and regional scales. A multi-sensor analysis was therefore applied to generate a series of biophysical attributes that can be indirectly related to the lateral variation of soil types. The soil organic carbon stock was also estimated for the Brazilian portion of the Rio Negro basin, based on geostatistical analysis (multiple regression kriging), remote sensing images, and legacy data. We observed that Podzols store an average carbon stock of 18 kg C m⁻² in the first meter of soil. A similar amount was observed in adjacent soils (mainly Ferralsols and Acrisols), with an average carbon stock of 15 kg C m⁻². However, if a 3 m soil depth is taken into account, the amount of carbon stored in Podzols is significantly higher, with values ranging from 55 kg C m⁻² to 82 kg C m⁻², compared with 18 kg C m⁻² to 25 kg C m⁻² in adjacent soils. The carbon stored in deep horizons of Podzols should therefore be considered an important carbon reservoir in the face of global climate change.
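For orientation, a schematic sketch of the regression kriging idea named above is given below: a regression links the target variable to remote-sensing covariates, and the regression residuals are kriged and added back to the trend. All data and names are synthetic placeholders, and pykrige is an assumed third-party dependency, so this is an illustration rather than the study's implementation.

```python
# Schematic regression kriging: trend from covariates + kriged residuals.
# Synthetic placeholder data; pykrige assumed installed (pip install pykrige).
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
n = 200
xy = rng.uniform(0, 100, size=(n, 2))              # sample locations (km)
covariates = rng.normal(size=(n, 3))               # stand-ins for biophysical attributes
beta = np.array([2.0, -1.0, 0.5])
soc = 15 + covariates @ beta + rng.normal(0, 1, n)  # placeholder SOC, kg C m^-2

# 1) regression part: large-scale trend explained by the covariates
reg = LinearRegression().fit(covariates, soc)
resid = soc - reg.predict(covariates)

# 2) geostatistical part: ordinary kriging of the regression residuals
ok = OrdinaryKriging(xy[:, 0], xy[:, 1], resid, variogram_model="spherical")

# 3) prediction at a new site = regression trend + kriged residual
new_xy = np.array([[50.0, 50.0]])
new_cov = rng.normal(size=(1, 3))                  # covariates at the new site
resid_hat, resid_var = ok.execute("points", new_xy[:, 0], new_xy[:, 1])
print(reg.predict(new_cov) + resid_hat)
```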
Identification of urban land-use patterns in São Paulo/SP using variogram parameters.
Abstract:
High-spatial-resolution images have boosted Remote Sensing studies of urban environments, since they allow a better distinction of the elements that compose this highly heterogeneous environment. Geostatistical techniques are increasingly used in Remote Sensing studies, and the variogram is an important geostatistical analysis tool because it reveals the spatial behavior of a regionalized variable, in this case the gray levels of a satellite image. This work evaluates a methodological proposal that consists of identifying urban residential patterns of three land-use and land-cover classes by analyzing the values of the variogram parameters range, sill, and nugget effect. The hypothesis is that the values of these parameters represent the standard spectral behavior of each class and therefore indicate a pattern in the spatial organization of each class. The research used an IKONOS image from 2002 and the land-use and land-cover classification of the córrego Bananal sub-basin, within the Rio Cabuçu de Baixo basin in São Paulo, SP. Samples of each class were extracted from the images, and the gray-level values of each pixel were used to compute the variograms. After analysis of the results, only the range parameter was retained, since it is through this parameter that the degree of homogeneity of each sample can be observed. The range values obtained from the variogram calculations identified most precisely the Conjuntos Residenciais (housing estates) class, which has singular patterns and characteristics, whereas the identification of the Ocupação Densa Regular and Ocupação Densa Irregular (dense regular and dense irregular occupation) classes did not achieve good precision, as these classes are similar in several respects.
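A small sketch of the core computation described above: an empirical variogram of the gray levels of an image sample, from which the range (the lag where semivariance levels off at the sill) is read. The patch below is a synthetic placeholder for an IKONOS sample, and the 95%-of-sill rule is my own crude convention, not the study's.

```python
# Empirical variogram of gray levels along one image direction,
# with a crude range estimate. Synthetic placeholder patch.
import numpy as np

def empirical_variogram(patch, max_lag):
    """Semivariance of horizontal gray-level pairs at each pixel lag."""
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = patch[:, h:] - patch[:, :-h]     # pairs h pixels apart
        gammas.append(0.5 * np.mean(diffs.astype(float) ** 2))
    return np.array(gammas)

rng = np.random.default_rng(4)
patch = rng.integers(0, 256, size=(64, 64))      # placeholder gray levels
gamma = empirical_variogram(patch, max_lag=20)

# crude range estimate: first lag reaching 95% of the final sill
sill = gamma[-1]
range_lag = int(np.argmax(gamma >= 0.95 * sill)) + 1
print(gamma.round(1), range_lag)
```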
Abstract:
A new methodology is proposed to produce subsidence activity maps based on the geostatistical analysis of persistent scatterer interferometry (PSI) data. PSI displacement measurements are interpolated using conditional Sequential Gaussian Simulation (SGS) to calculate multiple equiprobable realizations of subsidence. The result of this process is a series of interpolated subsidence values, with an estimate of the spatial variability and a confidence level for the interpolation. These maps complement the PSI displacement map, improving the identification of wide subsiding areas at a regional scale. At a local scale, they can be used to identify buildings susceptible to subsidence-related damage. To do so, the maximum differential settlement and the maximum angular distortion must be calculated for each building in the study area. Based on these PSI-derived parameters, buildings in which the serviceability limit state has been exceeded, and where in situ forensic analysis should be carried out, can be identified automatically. This methodology has been tested in the city of Orihuela (SE Spain) for the study of historical buildings damaged during the last two decades by subsidence due to aquifer overexploitation. The qualitative evaluation of the results for buildings where damage has been reported shows a success rate of 100%.
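A minimal sketch of the two building-damage indicators named above, computed over pairs of measurement points on one building. The coordinates and settlement values are synthetic placeholders standing in for PSI-derived displacements, and the pairwise definition is a simplified reading of the indicators, not the paper's exact procedure.

```python
# Maximum differential settlement and maximum angular distortion
# between scatterers on one (hypothetical) building.
import numpy as np
from itertools import combinations

# (x, y) positions in metres and cumulative settlement in mm
pts = np.array([[0.0, 0.0], [12.0, 0.0], [12.0, 8.0], [0.0, 8.0]])
settle = np.array([-4.0, -11.0, -9.0, -3.0])     # mm (negative = down)

max_diff, max_beta = 0.0, 0.0
for i, j in combinations(range(len(pts)), 2):
    d_settle = abs(settle[i] - settle[j])            # differential settlement
    span = np.linalg.norm(pts[i] - pts[j]) * 1000.0  # distance in mm
    max_diff = max(max_diff, d_settle)
    max_beta = max(max_beta, d_settle / span)        # angular distortion

print(f"max differential settlement = {max_diff:.1f} mm")
print(f"max angular distortion = 1/{1.0 / max_beta:.0f}")
```

Thresholding max_beta against a serviceability limit (for example the classical 1/500 type criteria) is what allows buildings to be flagged automatically.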
Abstract:
Senior thesis written for Oceanography 445
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The first part of this thesis deals with the interaction between a flood-detention basin and the underlying aquifer: the construction of a detention basin on the Baganza torrent, upstream of the city of Parma, is currently at the design stage. The aim of this structure is to reduce flood risk by temporarily storing, in an artificial reservoir, the most dangerous part of the flood volume, which would subsequently be released at discharges that can easily be conveyed through the urban reach of the torrent. The aquifer was first investigated and monitored, allowing its lithostratigraphic characterization. The stratigraphy can be summarized as a sequence of gravelly-sandy layers with a succession of more or less thick and continuous clay lenses, distinguishing two different aquifers (one phreatic and one confined). The present study refers only to the shallow aquifer, which was modeled numerically with finite differences using the MODFLOW_2005 software. The objective of this work is to represent the aquifer system under current conditions (in the absence of any structure) and under design conditions. Calibration was carried out under steady-state conditions using the piezometric levels collected at the observation points during the spring of 2013. Hydraulic conductivity values were estimated by means of a Bayesian geostatistical approach. The code used for the estimation is bgaPEST, a free software package for the solution of highly parameterized inverse problems, developed following the protocols of the PEST software. The inverse methodology estimates the hydraulic conductivity field by combining observations of the state of the system (piezometric levels in this case) with a priori information on the structure of the unknown parameters. The inverse procedure requires the computation of the sensitivity of each observation to each estimated parameter; this was evaluated efficiently using an adjoint-state formulation of the forward code, MODFLOW_2005_Adjoint. The results of the methodology are consistent with the alluvial nature of the investigated aquifer and with the information collected at the observation points. The calibrated model can therefore be used to support the design and management of the detention structure. The second part of this thesis deals with the analysis of the loads induced by preferential flow paths caused by piping phenomena within levees. Such preferential paths may be due to the presence of burrows dug by wild animals. This study was inspired by the collapse of the levee of the Secchia River (Modena), which occurred in January 2014 following a flood event during which the water level never reached the levee crest. The scientific commission, whose final report provides the data used for this study, attributed the collapse of the levee, most probably, to the presence of animal burrows. In order to analyze the behavior of the levee in intact conditions and in conditions modified by the existence of a tunnel crossing the levee body, a 3D numerical model of the levee was built using the well-known software Femwater and Feflow.
The models describe seepage within the levee considering the soil in both its saturated and unsaturated portions, adopting the finite element technique. The burrow was represented by elements with high permeability and porosity, whose values were varied in order to evaluate their influence on flows and water contents. To assess whether the analyzed situations lead to the onset of erosion, safety factor values were computed. The safety factor was evaluated in different ways, including the one recently proposed by Richards and Reddy (2014), which refers to the critical kinetic energy criterion. Finally, the model of Bonelli (2007) was used to compute the erosion time and the time remaining before collapse of the levee.
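For context, the quasi-linear Bayesian geostatistical inverse approach that bgaPEST implements estimates the parameter field by minimizing an objective of the following general form (generic notation of my own, not taken from the thesis):

```latex
\min_{\mathbf{s},\,\boldsymbol{\beta}}\;
\tfrac{1}{2}\bigl(\mathbf{y}-\mathbf{h}(\mathbf{s})\bigr)^{\top}\mathbf{R}^{-1}
\bigl(\mathbf{y}-\mathbf{h}(\mathbf{s})\bigr)
\;+\;
\tfrac{1}{2}\bigl(\mathbf{s}-\mathbf{X}\boldsymbol{\beta}\bigr)^{\top}\mathbf{Q}^{-1}
\bigl(\mathbf{s}-\mathbf{X}\boldsymbol{\beta}\bigr)
```

Here y are the observed piezometric heads, h(s) the forward model (MODFLOW_2005), s the log-conductivity field, R the observation error covariance, Xβ the prior mean of s, and Q the prior covariance implied by the geostatistical model; the adjoint-state code supplies the sensitivities ∂h/∂s required by the minimization.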
Abstract:
Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types will range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe, or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly where maximum likelihood methods are used. Although the storage requirements for the raw data scale only linearly with the number of observations, the computational complexity scales quadratically in memory and cubically in speed. Most modern commodity hardware has at least two processor cores, if not more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
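To make the likelihood approximation concrete, here is a minimal, generic sketch of a Vecchia-type composite likelihood, with synthetic data, an assumed exponential covariance, and hypothetical helper names (not the paper's implementation). Each observation is conditioned on a few previously ordered neighbours, so the O(n³) Gaussian log-likelihood factorises into n small terms that can be evaluated independently, which is what makes the computation parallelizable.

```python
# Vecchia-type approximate Gaussian log-likelihood: condition each
# observation on its m nearest earlier neighbours.
import numpy as np

def exp_cov(d, sill=1.0, rng=10.0, nugget=1e-6):
    """Exponential covariance with a small nugget (assumed model)."""
    return sill * np.exp(-d / rng) + nugget * (d == 0)

def vecchia_loglik(coords, z, m=10):
    n = len(z)
    ll = 0.0
    for i in range(n):
        if i == 0:
            mean, var = 0.0, exp_cov(0.0)
        else:
            d_prev = np.linalg.norm(coords[:i] - coords[i], axis=1)
            nb = np.argsort(d_prev)[:m]          # m nearest earlier points
            C = exp_cov(np.linalg.norm(
                coords[nb, None, :] - coords[None, nb, :], axis=-1))
            c = exp_cov(d_prev[nb])
            w = np.linalg.solve(C, c)
            mean = w @ z[nb]                     # conditional mean
            var = exp_cov(0.0) - c @ w           # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (z[i] - mean) ** 2 / var)
    return ll

gen = np.random.default_rng(0)
X = gen.uniform(0, 100, size=(500, 2))
z = gen.standard_normal(500)                     # placeholder data
print(vecchia_loglik(X, z))
```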
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If these extreme data destabilise the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
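The abstract names the Huber function among the robust likelihoods; below is a generic standalone sketch of a Huber negative log-likelihood and its influence function (my own minimal illustration, not the authors' REML estimator or damping scheme). Residuals within ±c behave as in a Gaussian model; beyond c their influence grows only linearly, so a single malfunctioning sensor cannot dominate the fit.

```python
# Huber influence function and Huber negative log-likelihood.
import numpy as np

def huber_psi(r, c=1.345):
    """Influence function: identity in the core, clipped in the tails."""
    return np.clip(r, -c, c)

def huber_neg_loglik(residuals, sigma, c=1.345):
    """Negative log-likelihood of the Huber density, up to a constant."""
    r = residuals / sigma
    quad = 0.5 * r**2                    # Gaussian core
    lin = c * np.abs(r) - 0.5 * c**2     # linear tails
    rho = np.where(np.abs(r) <= c, quad, lin)
    return np.sum(rho) + len(r) * np.log(sigma)

r = np.array([0.1, -0.4, 0.3, 8.0])      # one gross outlier
print(huber_psi(r))                      # outlier influence capped at c
print(huber_neg_loglik(r, sigma=1.0))
```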
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneity in the error characteristics of different sensors, both in distribution and in magnitude, presents problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One approach, model-based geostatistics, places a Gaussian process prior over the (latent) process being studied, with the sensor model forming part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution is non-Gaussian and computationally demanding, since Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated within a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
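A minimal sketch of the sequential, approximate update at the heart of such methods, under assumptions of my own (a one-dimensional latent value and a Poisson sensor likelihood): the non-Gaussian posterior after each observation is replaced by the Gaussian matching its first two moments, computed here by quadrature. This is a generic illustration of assumed-density filtering, not the paper's exact algorithm.

```python
# One assumed-density-filtering step: Gaussian prior x arbitrary
# likelihood -> moment-matched Gaussian posterior, via Gauss-Hermite
# quadrature for the standard normal weight.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def adf_update(m, v, loglik, n_quad=40):
    """Return moment-matched Gaussian (m, v) after one observation."""
    x, w = hermegauss(n_quad)            # nodes/weights for weight exp(-x^2/2)
    f = m + np.sqrt(v) * x               # prior evaluated at the nodes
    lik = np.exp(loglik(f))
    Z = np.sum(w * lik)                  # marginal likelihood (up to a constant)
    m_new = np.sum(w * lik * f) / Z
    v_new = np.sum(w * lik * f**2) / Z - m_new**2
    return m_new, v_new

# Poisson count sensor observing y = 7 with rate exp(latent)
y = 7
loglik = lambda f: y * f - np.exp(f)     # log Poisson likelihood, up to a constant
print(adf_update(m=1.0, v=0.5, loglik=loglik))
```

Because each observation is absorbed one at a time, no high-dimensional integral is ever required, which is what keeps the scheme cheap relative to Monte Carlo.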
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of both error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed-rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed.
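A minimal sketch of the reduced-rank idea the abstract refers to (a generic Nyström-style projection with synthetic data, not gptk's actual API): the full n×n covariance K is approximated through a small set of m "active" points as K ≈ Knm Kmm⁻¹ Kmn, so storage falls from O(n²) to O(nm).

```python
# Reduced-rank (projected process / Nystroem) covariance approximation.
import numpy as np

def rbf_cov(A, B, lengthscale=10.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(2000, 2))            # full dataset
Xa = X[rng.choice(len(X), 50, replace=False)]      # m = 50 active points

Kmm = rbf_cov(Xa, Xa) + 1e-8 * np.eye(len(Xa))
Knm = rbf_cov(X, Xa)

C = np.linalg.cholesky(Kmm)              # Kmm = C C^T
L = np.linalg.solve(C, Knm.T).T          # K ~= L L^T, with L = Knm C^{-T}
K_approx_diag = (L**2).sum(axis=1)       # approximate pointwise variances
print(K_approx_diag[:5])                 # compare with the exact variance 1.0
```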
Abstract:
This research addresses design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and degradation (demethylation). The sampling methodology investigates a proof-of-concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomenon of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.) and improves the precision of control by adding potentially significant information about unmonitored locations. Two types of sensors are investigated for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature, etc.) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy which minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations; spatial analysis is performed using geostatistical analysis, and optimization occurs by Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, taking known obstacles into consideration. Optimal areas of camera placement are determined from the areas generating the largest OPMs. Statistical analysis is performed using Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
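A toy sketch of the environmental-sensor placement strategy described above, under assumed covariance and grid settings of my own: random candidate layouts are drawn (Monte Carlo) and scored by the mean kriging variance they leave over a prediction grid, and the layout with the smallest mean variance is kept.

```python
# Monte Carlo sensor placement: minimize mean simple-kriging variance
# over a prediction grid (unit-sill RBF covariance assumed).
import numpy as np

def rbf_cov(A, B, ls=25.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def mean_kriging_variance(sensors, grid):
    K = rbf_cov(sensors, sensors) + 1e-8 * np.eye(len(sensors))
    k = rbf_cov(sensors, grid)                          # (m, G)
    var = 1.0 - np.sum(k * np.linalg.solve(K, k), axis=0)
    return var.mean()

rng = np.random.default_rng(2)
g = np.linspace(0, 100, 25)
grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T     # prediction grid

best, best_score = None, np.inf
for _ in range(500):                                    # Monte Carlo trials
    cand = rng.uniform(0, 100, size=(8, 2))             # 8 candidate sensors
    score = mean_kriging_variance(cand, grid)
    if score < best_score:
        best, best_score = cand, score
print(best_score)
```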
Abstract:
Salinity, water temperature, and chlorophyll a (chl-a) biomass were used as performance measures in the period 1999–2001 to evaluate the effect of a hydrological rehabilitation project in the Ciénaga Grande de Santa Marta (CGSM)–Pajarales lagoon complex, Colombia, where freshwater diversions were initiated in 1995 and completed in 1998. The objective of this study was to evaluate how diversions of freshwater into previously hypersaline (>80) environments changed the spatial and temporal distribution of environmental characteristics. Following the diversion, 19 surveys along transects were carried out in the CGSM–Pajarales complex using a flow-through system to continuously measure selected water quality parameters. Geostatistical analysis indicates that the hydrology and salinity regimes and the water circulation patterns in the CGSM lagoon are largely controlled by freshwater discharge from the Fundacion, Aracataca, and Sevilla Rivers. Residence times in the CGSM lagoon were similar before (15.5 ± 3.8 days) and after (14.2 ± 2.0 days) the rehabilitation project, indicating that the system is flushed regularly. In contrast, chl-a biomass was highly variable in the CGSM–Pajarales lagoon complex and not related to discharge patterns. Mean annual chl-a biomass (44–250 μg L⁻¹) following the diversion project was similar to values recorded since the 1980s and still remains among the highest reported in coastal systems around the world, owing to the system's unique hydrology, regulated by the Magdalena River and Sierra Nevada de Santa Marta watersheds, and its strong teleconnection to the El Niño Southern Oscillation (ENSO). Our results confirm that the reduction in salinity in the CGSM lagoon and Pajarales complex during 1999–2000 was largely driven by high precipitation (2500 mm) induced by ENSO–La Niña rather than by the freshwater diversions.
Abstract:
Approaches to quantify organic carbon accumulation on a global scale generally do not consider the small-scale variability of sedimentary and oceanographic boundary conditions along continental margins. In this study, we present a new approach to regionalize the total organic carbon (TOC) content of surface sediments (<5 cm sediment depth), based on a compilation of more than 5500 single measurements from various sources. The global TOC distribution was determined by applying a combined qualitative and quantitative geostatistical method. Overall, 33 benthic TOC-based provinces were defined and used to derive the global distribution pattern of the TOC content of surface sediments at a 1°×1° grid resolution. Regional dependencies of data points within each province are expressed by modeled semi-variograms. Measured and estimated TOC values show good correlation, emphasizing the applicability of the method. The accumulation of organic carbon in marine surface sediments is a key parameter in the control of mineralization processes and the material exchange between the sediment and the ocean water. Our approach will help to improve global budgets of nutrient and carbon cycles.
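For illustration, a small sketch of fitting a spherical semi-variogram model to empirical (lag, semivariance) pairs, the kind of per-province model the study summarizes; the empirical values below are made-up placeholders, not the study's data.

```python
# Least-squares fit of a spherical semivariogram model.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, rng_):
    """Spherical model: rises to nugget + sill at range rng_, flat beyond."""
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, inside, nugget + sill)

lags = np.array([10, 25, 50, 75, 100, 150, 200.0])            # km (placeholder)
gamma = np.array([0.12, 0.28, 0.46, 0.55, 0.60, 0.62, 0.61])  # placeholder

p0 = [0.05, 0.55, 120.0]                                      # initial guess
(nug, sill, rng_), _ = curve_fit(spherical, lags, gamma, p0=p0)
print(f"nugget={nug:.3f} sill={sill:.3f} range={rng_:.1f} km")
```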
Abstract:
This study aimed to evaluate the influence of the main meteorological mechanisms that generate and inhibit precipitation, and of the interactions between their different scales of operation, on the spatial and temporal variability of the annual precipitation cycle in Rio Grande do Norte. It also considered local and regional circumstances, thereby creating a scientific basis to support future actions in managing water demand in the state. The database consists of 45 years of monthly precipitation, from 1963 to 2007, provided by EMPARN. The methodology began with a descriptive statistical analysis of the historical data to verify the stability of the series; geostatistical tools were then applied to produce maps of the variables. Within the geostatistical toolkit, the Kriging interpolation method was chosen because it yielded the best results and the smallest errors. Among the results, we highlight the annual rainfall cycle of the state, which is influenced by meteorological mechanisms at different spatial and temporal scales. The main mechanisms modulating the cycle are the Intertropical Convergence Zone (ITCZ), acting from mid-February to mid-May throughout the state; easterly waves (OL), squall lines (LI), breeze systems, and orographic rainfall, acting mainly on the coastal strip between February and July; and upper-level cyclonic vortices (VCANs), Mesoscale Convective Complexes (CCMs), and orographic rain in any region of the state, mainly in spring and summer. Among larger-scale phenomena, El Niño and La Niña (ENSO) in the tropical Pacific basin stand out: La Niña episodes usually bring normal or rainy years, whereas prolonged periods of drought are influenced by El Niño. In the Atlantic Ocean, the dipole pattern also affects the intensity of the rainfall cycle in the state. The rainfall cycle in Rio Grande do Norte is divided into two periods: one comprising the West and Central mesoregions and the western portion of the Agreste Potiguar, west of the Chapada Borborema, with rains from mid-February to mid-May; and a second period, between February and July, with rains in the East and Agreste mesoregions, located upwind of the Chapada Borborema. Both are interspersed with dry periods without significant rainfall and with rainy-dry and dry-rainy transition periods in which isolated rainfall occurs. Approximately 82% of the rainfall stations of the state, corresponding to 83.4% of the total area of Rio Grande do Norte, do not record annual totals above 900 mm. Because the water supply of the state is maintained by small reservoirs that are already in an advanced state of eutrophication, when rains occur they wash out and replace the water in the reservoirs, improving water quality and reducing the eutrophication process. When significant rains do not occur, or after long periods of shortage, the eutrophication and deterioration of water in the dams increase significantly. Knowledge of the behavior of the annual rainfall cycle thus provides insight into whether the following period is likely to be rainy or prone to shortage, mainly by observing the trends of larger-scale phenomena.
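A minimal sketch of the kriging interpolation step the abstract describes, with made-up station coordinates and rainfall totals (not EMPARN's data) and pykrige as an assumed third-party dependency:

```python
# Ordinary kriging of station rainfall onto a regular grid.
# Placeholder data; pykrige assumed installed (pip install pykrige).
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(3)
lon = rng.uniform(-38.5, -34.9, 45)      # placeholder station longitudes
lat = rng.uniform(-6.9, -4.8, 45)        # placeholder station latitudes
rain = rng.uniform(400, 1600, 45)        # placeholder annual totals (mm)

ok = OrdinaryKriging(lon, lat, rain, variogram_model="spherical")
grid_lon = np.linspace(-38.5, -34.9, 60)
grid_lat = np.linspace(-6.9, -4.8, 40)
z, ss = ok.execute("grid", grid_lon, grid_lat)   # predictions + variances
print(z.shape, float(ss.mean()))
```

The kriging variance ss is what makes the method attractive here: it quantifies where the station network leaves the rainfall field poorly constrained.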