960 results for Kriging interpolation


Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and degradation (demethylation).

The sampling methodology investigates a proof of concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection.

Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomenon of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of control by adding potentially significant information about unmonitored locations.

Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis over randomly chosen candidate sensor locations. Spatial analysis is performed with geostatistical analysis, and optimization is carried out with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, taking known obstacles into consideration. Optimal areas for camera placement are those generating the largest OPMs. Statistical behavior is examined using Monte Carlo analysis with a varying number of obstacles and cameras in a defined space.
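As an illustrative sketch (not the dissertation's implementation), the OPM idea can be reduced to counting unobstructed line-of-sight rays on a grid; the grid size, obstacle set, and eight-direction ray cast below are assumptions for demonstration only.

```python
# Illustrative OPM sketch: count unobstructed line-of-sight rays from a grid
# point in 8 compass directions, stopping at known obstacles or the grid edge.

def opm(grid_w, grid_h, obstacles, x, y, max_range=10):
    """Return the number of unobstructed rays from (x, y); 0 on an obstacle."""
    if (x, y) in obstacles:
        return 0
    directions = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    score = 0
    for dx, dy in directions:
        cx, cy = x, y
        blocked = False
        for _ in range(max_range):
            cx, cy = cx + dx, cy + dy
            if not (0 <= cx < grid_w and 0 <= cy < grid_h):
                break  # reached the grid edge: the ray is clear
            if (cx, cy) in obstacles:
                blocked = True
                break
        if not blocked:
            score += 1
    return score

# Rank all free grid points; the best camera sites have the largest OPM.
obstacles = {(2, 2), (3, 2), (4, 2)}
scores = {(x, y): opm(5, 5, obstacles, x, y)
          for x in range(5) for y in range(5) if (x, y) not in obstacles}
best = max(scores, key=scores.get)
```

The Monte Carlo step of the study then repeats this ranking over randomized obstacle and camera configurations.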

Relevance:

10.00%

Publisher:

Abstract:

The major objectives of this dissertation were to develop optimal spatial techniques to model the spatial-temporal changes of the lake sediments and their nutrients from 1988 to 2006, and to evaluate the impacts of the hurricanes that occurred during 1998–2006. The mud zone shrank by about 10.5% from 1988 to 1998 and grew by about 6.2% from 1998 to 2006. Mud areas, volumes and weights were calculated using validated kriging models. From 1988 to 1998, mud thicknesses increased by up to 26 cm in the central lake area, while the mud area and volume decreased by about 13.78% and 10.26%, respectively. From 1998 to 2006, mud depths declined by up to 41 cm in the central lake area and mud volume was reduced by about 27%. Mud weight increased by up to 29.32% from 1988 to 1998, but was reduced by over 20% from 1998 to 2006. The reduction of mud sediments is likely due to re-suspension and redistribution by waves and currents produced by large storm events, particularly Hurricanes Frances and Jeanne in 2004 and Wilma in 2005. Regression, kriging, geographically weighted regression (GWR) and regression-kriging models were calibrated and validated for the spatial analysis of the lake's sediment TP and TN. GWR models provide the most accurate predictions for TP and TN based on model performance and error analysis. TP values declined from an average of 651 to 593 mg/kg from 1998 to 2006, especially in the lake's western and southern regions. From 1988 to 1998, TP declined in the northern and southern areas and increased in the central-western part of the lake. The TP weights increased by about 37.99%–43.68% from 1988 to 1998 and decreased by about 29.72%–34.42% from 1998 to 2006. From 1988 to 1998, TN decreased in most areas, especially in the northern and southern lake regions; the western littoral zone had the biggest increase, up to 40,000 mg/kg. From 1998 to 2006, TN declined from an average of 9,363 to 8,926 mg/kg, especially in the central and southern regions, while the biggest increases occurred in the northern lake and southern edge areas. TN weights increased by about 15%–16.2% from 1988 to 1998, and decreased by about 7%–11% from 1998 to 2006.
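The kriging models above start from an empirical semivariogram; a minimal sketch of that classical variography step, on hypothetical transect samples rather than the lake data, might look like:

```python
# Empirical semivariogram sketch: bin all sample pairs by separation distance
# and compute the average half squared difference per bin.

import math

def empirical_variogram(points, values, bin_width, n_bins):
    """points: [(x, y)], values: [z]. Returns semivariance per distance bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(points[i], points[j])
            b = int(h // bin_width)
            if b < n_bins:
                sums[b] += (values[i] - values[j]) ** 2
                counts[b] += 1
    return [s / (2 * c) if c else None for s, c in zip(sums, counts)]

# Hypothetical transect: nearby samples are similar, distant ones differ more,
# so semivariance should rise with lag before approaching a sill.
pts = [(float(i), 0.0) for i in range(10)]
vals = [0.0, 0.1, 0.3, 0.2, 0.8, 0.9, 1.1, 1.0, 1.4, 1.5]
gamma = empirical_variogram(pts, vals, bin_width=3.0, n_bins=3)
```

A variogram model fitted to `gamma` then supplies the weights used by the kriging predictor.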

Relevance:

10.00%

Publisher:

Abstract:

A combination of statistical and interpolation methods and Geographic Information System (GIS) spatial analysis was used to evaluate the spatial and temporal changes in groundwater Cl− concentrations in Collier and Lee Counties (southwestern Florida), and Miami-Dade and Broward Counties (southeastern Florida), since 1985. In southwestern Florida, the average Cl− concentrations in the shallow wells (0–43 m) in Collier and Lee Counties increased from 132 mg L−1 in 1985 to 230 mg L−1 in 2000. The average Cl− concentrations in the deep wells (>43 m) of southwestern Florida increased from 392 mg L−1 in 1985 to 447 mg L−1 in 2000. Results also indicated a positive correlation between the mean sea level and Cl− concentrations and between the mean sea level and groundwater levels for the shallow wells. Concentrations in the Biscayne Aquifer (southeastern Florida) were significantly higher than those of southwestern Florida. The average Cl− concentrations increased from 159 mg L−1 in 1985 to 470 mg L−1 in 2010 for the shallow wells (<33 m) and from 1360 mg L−1 in 1985 to 2050 mg L−1 in 2010 for the deep wells (>33 m). In the Biscayne Aquifer, wells showed a positive or negative correlation between mean sea level and Cl− concentrations according to their location with respect to the saltwater intrusion line. Wells located inland behind canal control structures and west of the saltwater intrusion line showed negative correlation values, whereas wells located east of the saltwater intrusion line showed positive values. Overall, the results indicated that since 1985, there was a potential decline in the available freshwater resources estimated at about 12–17% of the available drinking-quality groundwater of the southeastern study area located in the Biscayne Aquifer.
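The reported sea-level correlations reduce to Pearson's r between paired annual series; a small sketch with hypothetical well data (not the study's monitoring records):

```python
# Pearson correlation sketch between annual mean sea level and groundwater
# Cl- concentration for a single well. All numbers below are hypothetical.

def pearson_r(xs, ys):
    """Pearson's r for two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sea_level_mm = [0, 12, 25, 31, 44, 58]          # hypothetical annual means
chloride_mg_l = [150, 165, 190, 198, 230, 260]  # hypothetical well east of the line
r = pearson_r(sea_level_mm, chloride_mg_l)      # strongly positive
```

A well west of the saltwater intrusion line, behind canal control structures, would show a negative r under the study's findings.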

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research was to investigate the influence of elevation and other terrain characteristics on the spatial and temporal distribution of rainfall. A comparative analysis of several spatial interpolation methods was conducted using mean monthly precipitation values in order to select the best one. Building on those results, an Artificial Neural Network model was fitted for the interpolation of monthly precipitation values over a 20-year period. With inputs such as longitude, latitude, elevation, and four geomorphologic characteristics, anchored by seven weather stations, the model reached a high correlation coefficient (r = 0.85). This research demonstrated a strong influence of elevation and other geomorphologic variables on the spatial distribution of precipitation, and confirmed that the relationships involved are nonlinear. The model will be used to fill gaps in monthly precipitation time series and to generate maps of the spatial distribution of monthly precipitation at a resolution of 1 km².
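A toy analogue of such an ANN interpolator, assuming a one-input, one-hidden-layer network trained by plain gradient descent on a made-up nonlinear curve (the real model used longitude, latitude, elevation and four geomorphologic inputs):

```python
# Tiny neural-network regression sketch: 1 input, 4 tanh hidden units,
# linear output, full-batch gradient descent on a toy nonlinear target.

import math, random

random.seed(0)
W = [random.uniform(-1, 1) for _ in range(4)]   # hidden weights
B = [0.0] * 4                                   # hidden biases
V = [random.uniform(-1, 1) for _ in range(4)]   # output weights
c = 0.0                                         # output bias

def predict(x):
    return sum(v * math.tanh(w * x + b) for w, b, v in zip(W, B, V)) + c

# Made-up "terrain -> precipitation" relationship, nonlinear in the input.
data = [(x / 10.0, math.sin(3.0 * x / 10.0)) for x in range(-10, 11)]

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.05
for _ in range(2000):
    gW = [0.0] * 4; gB = [0.0] * 4; gV = [0.0] * 4; gc = 0.0
    for x, y in data:
        hs = [math.tanh(w * x + b) for w, b in zip(W, B)]
        err = 2 * (sum(v * h for v, h in zip(V, hs)) + c - y) / len(data)
        for k in range(4):
            gV[k] += err * hs[k]
            gW[k] += err * V[k] * (1 - hs[k] ** 2) * x
            gB[k] += err * V[k] * (1 - hs[k] ** 2)
        gc += err
    W = [w - lr * g for w, g in zip(W, gW)]
    B = [b - lr * g for b, g in zip(B, gB)]
    V = [v - lr * g for v, g in zip(V, gV)]
    c -= lr * gc

loss_after = mse()  # training should reduce the fitting error
```

The nonlinearity of the hidden units is what lets such a network capture the elevation-precipitation relationships that linear interpolators miss.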

Relevance:

10.00%

Publisher:

Abstract:

Computational intelligence methods have been expanding into industrial applications, motivated by their ability to solve engineering problems. Embedded systems follow the same idea, placing computational intelligence tools directly on machines. There are many works in the areas of embedded systems and intelligent systems, but few papers that join both. The aim of this study was to implement an adaptive fuzzy neural network in hardware, with online training, embedded on a Field Programmable Gate Array (FPGA). The system can adapt during the execution of a given application, aiming at online performance improvement. The proposed architecture is modular, allowing different configurations of fuzzy neural network topologies with online training. The system was applied to mathematical function interpolation, pattern classification, and self-compensation of industrial sensors, and achieved satisfactory performance in all three tasks. The experimental results show the advantages and disadvantages of online training in hardware when performed in parallel versus sequentially: sequential training saves FPGA area but increases the complexity of the architecture's control, whereas parallel training achieves high performance and reduced processing time, with pipelining used to increase throughput. The development was based on available tools for FPGA circuits.
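A software analogue of one fuzzy-neural building block (Gaussian memberships with a normalized weighted sum; the membership shapes and rule count are assumptions for illustration, and the FPGA pipeline itself is not reproduced):

```python
# Zero-order Takagi-Sugeno style fuzzy neuron: Gaussian membership degrees
# blended into a normalized weighted average. Online training would adapt
# the centers, widths and weights while the system runs.

import math

def fuzzy_neuron(x, centers, sigmas, weights):
    """Membership-weighted average output for a single scalar input."""
    mus = [math.exp(-((x - c) / s) ** 2) for c, s in zip(centers, sigmas)]
    return sum(m * w for m, w in zip(mus, weights)) / sum(mus)

# Two rules: inputs near 0.0 map to ~0, inputs near 1.0 map to ~1.
y_low = fuzzy_neuron(0.0, [0.0, 1.0], [0.5, 0.5], [0.0, 1.0])
y_mid = fuzzy_neuron(0.5, [0.0, 1.0], [0.5, 0.5], [0.0, 1.0])  # even blend
```

In hardware, each rule evaluation maps naturally onto a parallel datapath, which is what makes the parallel training variant fast at the cost of FPGA area.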

Relevance:

10.00%

Publisher:

Abstract:

In oil prospecting, seismic data are usually irregularly and sparsely sampled along the spatial coordinates due to obstacles in the placement of geophones. Fourier methods provide a way to regularize seismic data and are efficient if the input data are sampled on a regular grid. However, when these methods are applied to a set of irregularly sampled data, the orthogonality among the Fourier components is broken and the energy of a Fourier component may "leak" into other components, a phenomenon called "spectral leakage". The objective of this research is to study methods for the spectral representation of irregularly sampled data. In particular, we present the basic structure of the NDFT (nonuniform discrete Fourier transform), study its properties, and demonstrate its potential in the processing of seismic signals. To this end we study the FFT (fast Fourier transform) and the NFFT (nonuniform fast Fourier transform), which rapidly compute the DFT (discrete Fourier transform) and the NDFT, respectively. We compare the recovery of the signal using the FFT, DFT and NFFT. We approach the interpolation of seismic traces using the ALFT (antileakage Fourier transform) to overcome the problem of spectral leakage caused by uneven sampling. Applications to synthetic and real data showed that the ALFT method works well on seismic data from complex geology, suffers little from irregular spatial sampling and edge effects, and is robust and stable with noisy data. However, it is not as efficient as the FFT, and its reconstruction is not as good when the acquisition leaves large gaps.
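The spectral leakage described above is easy to demonstrate with a direct-sum NDFT; in this sketch (sample counts and the test frequency chosen arbitrarily), a pure tone sampled on an irregular grid spreads energy into neighbouring Fourier bins, while the uniformly sampled case does not:

```python
# Direct-sum NDFT sketch showing spectral leakage under irregular sampling.

import cmath, math, random

def ndft(ts, xs, n_freq, period):
    """NDFT of samples xs taken at (possibly irregular) times ts."""
    return [sum(x * cmath.exp(-2j * math.pi * k * t / period)
                for t, x in zip(ts, xs)) / len(ts)
            for k in range(n_freq)]

period, n = 1.0, 64
random.seed(1)
regular = [i / n for i in range(n)]
irregular = sorted(random.uniform(0.0, period) for _ in range(n))

spec_reg = ndft(regular, [math.sin(2 * math.pi * 3 * t) for t in regular],
                8, period)
spec_irr = ndft(irregular, [math.sin(2 * math.pi * 3 * t) for t in irregular],
                8, period)

# Energy outside the true k = 3 bin: ~0 for uniform sampling, nonzero for
# irregular sampling, because orthogonality between components is broken.
leak_reg = sum(abs(c) for k, c in enumerate(spec_reg) if k != 3)
leak_irr = sum(abs(c) for k, c in enumerate(spec_irr) if k != 3)
```

ALFT attacks exactly this effect by iteratively subtracting the strongest estimated component before re-estimating the rest.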


Relevance:

10.00%

Publisher:

Abstract:

In this study we present a global distribution pattern and budget of the minimum flux of particulate organic carbon to the sea floor (J POC alpha). The estimates are based on regionally specific correlations between the diffusive oxygen flux across the sediment-water interface, the total organic carbon content in surface sediments, and the oxygen concentration in bottom waters. For this, we modified the principal equation of Cai and Reimers [1995] into a basic Monod reaction rate, applied within 11 regions where in situ measurements of diffusive oxygen uptake exist. By applying the resulting transfer functions to other regions with similar sedimentary conditions and interpolating over area, we calculated a minimum global budget of particulate organic carbon actually reaching the sea floor of ~0.5 GtC yr−1 (>1000 m water depth), whereas approximately 0.002-0.12 GtC yr−1 is buried in the sediments (0.01-0.4% of surface primary production). Although our global budget is in good agreement with previous studies, we found conspicuous differences among the distribution patterns of primary production, calculations based on particle-trap collections of the POC flux, and J POC alpha of this study. These deviations, located especially in the southeastern and southwestern Atlantic Ocean, the Greenland and Norwegian Seas, and the entire equatorial Pacific Ocean, strongly indicate a considerable influence of lateral particle transport on the vertical link between surface waters and underlying sediments. This observation is supported by sediment-trap data. Furthermore, local differences in the availability and quality of the organic matter, as well as different transport mechanisms through the water column, are discussed.
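The Monod reaction-rate form at the core of the transfer functions can be sketched as follows; the `v_max` and `k_half` values are placeholders, not the study's regional fits:

```python
# Monod-type rate sketch: diffusive oxygen uptake (and hence the inferred
# POC flux) rises with bottom-water O2 and saturates at v_max.
# Parameters below are hypothetical, chosen only to show the shape.

def monod_flux(o2_bw, v_max, k_half):
    """Monod rate: half of v_max is reached when o2_bw equals k_half."""
    return v_max * o2_bw / (k_half + o2_bw)

# Saturation behaviour: at O2 far above k_half, more O2 barely changes flux.
low = monod_flux(10.0, v_max=1.0, k_half=20.0)
high = monod_flux(300.0, v_max=1.0, k_half=20.0)
```

Regional calibration then pins `v_max` and `k_half` to the in situ oxygen-uptake measurements before the function is transferred to unmeasured regions.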

Relevance:

10.00%

Publisher:

Abstract:

For sediment core AMK4-316 (460 cm long), a climatostratigraphy covering about the last 145 ka was established on the basis of radiocarbon, oxygen isotope, and lithological data. Factor analysis and spline interpolation applied to the distribution of planktic foraminifera species allowed reconstruction of average annual and seasonal temperatures and salinities at the surface and at 100 m depth. The optimum of the Last Interglaciation (5e) is characterized by maximal temperatures, low amplitudes of seasonal fluctuations, and an increased thickness of the upper homogeneous layer. The glacial hydrological mode arose here 115 ka ago. Coolings preceded the corresponding events of global continental glaciation. Minimal average annual temperatures (4-4.5°C) are reconstructed for 47-45, 42, 36, 29-30, and 10 ka. For the 50-30 ka interval, numerous strong temperature fluctuations reflecting migrations of the polar front are established. Maximal differences in salinity between the surface and 100 m depth, indicating the influence of meltwater, occurred at the beginning of the deglaciations (135 and 20 ka) and arose repeatedly in the 50-30 ka interval. The Last Glacial Maximum (18 ka) is characterized by the lowest salinity, but not by a minimum of surface temperature. Surface temperature remained lowered until 10 ka. The average annual surface temperature of the Holocene optimum was 2°C above the modern one and 2°C below the temperature of the Interglaciation optimum (5e); the thickness of the upper homogeneous layer exceeded 100 m.
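The spline-interpolation step can be sketched with a natural cubic spline on uniformly spaced knots; the depth-temperature values here are placeholders, not AMK4-316 measurements:

```python
# Natural cubic spline sketch (uniform knot spacing, natural end conditions):
# interior second derivatives come from a tridiagonal system solved with the
# Thomas algorithm; the returned evaluator passes exactly through the knots.

def natural_cubic_spline(xs, ys):
    """Return an evaluator S(x) for the natural cubic spline through knots."""
    n = len(xs) - 1
    h = xs[1] - xs[0]                      # assumes uniform spacing
    rhs = [6.0 * (ys[i-1] - 2 * ys[i] + ys[i+1]) / h**2 for i in range(1, n)]
    diag = [4.0] * (n - 1)
    for i in range(1, n - 1):              # forward elimination (sub/super = 1)
        w = 1.0 / diag[i-1]
        diag[i] -= w
        rhs[i] -= w * rhs[i-1]
    m = [0.0] * (n + 1)                    # natural ends: M_0 = M_n = 0
    for i in range(n - 2, -1, -1):         # back substitution
        m[i+1] = (rhs[i] - m[i+2]) / diag[i]

    def S(x):
        i = min(int((x - xs[0]) // h), n - 1)
        a, b = xs[i], xs[i+1]
        return (m[i] * (b - x)**3 / (6 * h) + m[i+1] * (x - a)**3 / (6 * h)
                + (ys[i] / h - m[i] * h / 6) * (b - x)
                + (ys[i+1] / h - m[i+1] * h / 6) * (x - a))
    return S

depths = [0.0, 1.0, 2.0, 3.0, 4.0]         # placeholder sample positions
temps = [12.0, 9.5, 4.2, 6.8, 10.1]        # placeholder reconstructed values
temp_at = natural_cubic_spline(depths, temps)
mid = temp_at(1.5)                         # smooth estimate between samples
```

Unlike linear interpolation, the cubic fit gives a continuous second derivative, which matters when reading gradual transitions out of a sparsely sampled core.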

Relevance:

10.00%

Publisher:

Abstract:

The Antarctic Pack Ice Seal (APIS) Program was initiated in 1994 to estimate the abundance of four species of Antarctic phocids: the crabeater seal Lobodon carcinophaga, Weddell seal Leptonychotes weddellii, Ross seal Ommatophoca rossii and leopard seal Hydrurga leptonyx, and to identify ecological relationships and habitat use patterns. The Atlantic sector of the Southern Ocean (the eastern sector of the Weddell Sea) was surveyed by research teams from Germany, Norway and South Africa using a range of aerial methods over five austral summers between 1996-1997 and 2000-2001. We used these observations to model densities of seals in the area, taking into account haul-out probabilities, survey-specific sighting probabilities and covariates derived from satellite-based ice concentrations and bathymetry. These models predicted the total abundance over the area bounded by the surveys (30°W to 10°E). In this sector of the coast, we estimated seal abundances of 514 (95% CI 337-886) × 10³ crabeater seals, 60.0 (43.2-94.4) × 10³ Weddell seals and 13.2 (5.50-39.7) × 10³ leopard seals. The crabeater seal densities, approximately 14,000 seals per degree of longitude, are similar to estimates obtained by surveys in the Pacific and Indian sectors by other APIS researchers. Very few Ross seals were observed (24 in total), leading to a conservative estimate of 830 (119-2894) individuals over the study area. These results provide an important baseline against which to compare future changes in seal distribution and abundance.
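The role of the haul-out and sighting probabilities in such estimates can be illustrated with a back-of-envelope correction (all numbers hypothetical, and far simpler than the covariate-driven density models actually used):

```python
# Back-of-envelope abundance correction: a raw aerial strip count is scaled
# up by the probability a seal was hauled out, the probability a hauled-out
# seal was sighted, and the fraction of the study area actually surveyed.
# All values below are hypothetical illustrations.

def corrected_abundance(count, p_haulout, p_sighting, frac_area_surveyed):
    """Scale a raw strip count to a total-abundance estimate."""
    return count / (p_haulout * p_sighting * frac_area_surveyed)

# e.g. 1,000 seals counted, 60% hauled out at survey time, 90% of hauled-out
# seals detected, survey strips covering 2% of the study area:
estimate = corrected_abundance(1000, 0.60, 0.90, 0.02)
```

Errors in the correction probabilities propagate multiplicatively, which is why the reported confidence intervals are so wide for the rarely sighted Ross seal.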

Relevance:

10.00%

Publisher:

Abstract:

A first-principles method is applied to find the intra- and intervalley n-type carrier scattering rates for substitutional carbon in silicon. The method builds on a previously developed first-principles approach, with the introduction of an interpolation technique to determine the intravalley scattering rates. Intravalley scattering is found to be the dominant alloy scattering process in Si1-xCx, followed by g-type intervalley scattering. Mobility calculations show that alloy scattering due to substitutional C alone cannot account for the experimentally observed degradation of the mobility. We show that incorporating additional charged-impurity scattering due to electrically active interstitial C complexes models this residual resistivity well.
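Combining independent scattering channels in such mobility calculations follows Matthiessen's rule; the mobility numbers below are hypothetical, not the paper's computed values:

```python
# Matthiessen's rule sketch: inverse mobilities from independent scattering
# mechanisms add, so the combined mobility is below every single channel.
# The numeric values are hypothetical illustrations only.

def combined_mobility(mobilities):
    """1/mu_total = sum_i 1/mu_i for independent scattering mechanisms."""
    return 1.0 / sum(1.0 / m for m in mobilities)

# If alloy scattering alone gave a (hypothetical) 1200 cm^2/Vs, adding a
# charged-impurity channel of 600 cm^2/Vs pulls the total down to 400.
mu_total = combined_mobility([1200.0, 600.0])
```

This is the arithmetic behind the residual-resistivity argument: when one channel alone over-predicts the measured mobility, an additional channel must be contributing.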

Relevance:

10.00%

Publisher:

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
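The Hankel-matrix/principal-component step can be sketched on a toy signal (the row count and the power-iteration shortcut for the SVD are simplifications; the dissertation works with matrices assembled from measured floor accelerations):

```python
# Sketch of the extraction step: build a Hankel matrix from an acceleration
# series and pull out its leading right singular vector by power iteration
# on H^T H. Toy sinusoidal signal; not the hospital records.

import math

def hankel(signal, rows):
    """Hankel matrix whose rows are successive length-(n-rows+1) windows."""
    cols = len(signal) - rows + 1
    return [signal[r:r + cols] for r in range(rows)]

def leading_right_singular_vector(H, iters=200):
    """Power iteration on H^T H; returns a unit-norm right singular vector."""
    cols = len(H[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(cols)) for row in H]  # H v
        w = [sum(H[i][j] * u[i] for i in range(len(H)))             # H^T u
             for j in range(cols)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

accel = [math.sin(0.3 * t) for t in range(40)]   # toy floor acceleration
H = hankel(accel, rows=8)
pc = leading_right_singular_vector(H)            # leading principal direction
```

For a noiseless sinusoid the Hankel matrix has rank two, which is why a handful of such components can summarize the measured response before the curvature-minimizing interpolation between floors.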

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating shear wave velocity profiles from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing peak floor responses and force-displacement relations within the isolation system obtained from OpenSees simulations with the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are described to address the effects of soil-structure interaction.

Relevance:

10.00%

Publisher:

Abstract:

Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the size of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.

Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 addresses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the interpolation of methylation levels. Chapters 4 and 5 are both about robust inference for the models. Chapter 4 proposes a new robustness criterion for parameter estimation and shows that several inference procedures satisfy it. Chapter 5 develops a new prior that satisfies further criteria and is thus proposed for use in practice.
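Chapter 3's exact algorithm is not reproduced here, but the flavor of linear-time gap interpolation on an evenly spaced series can be sketched as:

```python
# Single-pass, O(n) linear interpolation filling missing values (None) in an
# evenly spaced series; edge gaps hold the nearest observed value. A generic
# sketch of the gap-filling task, not the thesis's exact algorithm.

def fill_gaps(series):
    """Linearly interpolate runs of None between observed values."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                                   # end of the gap run
            left = out[i - 1] if i > 0 else out[j]       # leading gap: hold
            right = out[j] if j < len(out) else out[i - 1]  # trailing gap: hold
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                out[k] = left + (right - left) * t
            i = j
        else:
            i += 1
    return out

filled = fill_gaps([1.0, None, None, 4.0, None, 6.0])
```

Each element is visited a constant number of times, so the running time stays linear regardless of how the gaps are distributed (assuming at least one observed value).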

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is the first comprehensive and synthetic study of the Irish presentation and legends of Longinus. Longinus was the soldier at the crucifixion who pierced Christ with a spear, who believed and, according to some texts, was healed of his blindness by the blood and water issuing from the wound, and who was later martyred for his belief. In my thesis I survey the knowledge and use of the legend of Longinus in Ireland across genres and over time. Sources used for the analyses include iconographic representations of the spear-bearer in manuscripts, metalwork and stone, and textual representations of the figure of Longinus ranging over the history of Irish literature from the early medieval to the early modern period, as well as over Irish and Hiberno-Latin texts. The thesis consists of four core chapters: analyses of the presentations of Longinus in early medieval Irish texts and in the iconographic tradition (I, II); editions of the extant Irish and the earliest surviving Latin texts of the Passion of Longinus, and of a little-known short tract describing the healing of Longinus from Leabhar Breac (III); and a discussion of the later medieval Irish popular traditions (IV). Particular attention is given to two intriguing peculiarities of the Irish tradition. Most early Irish Gospel books feature an interpolation of the episode of the spear-thrust at Matthew 27:49, directly preceding the death of Christ, implying its reading as the immediate cause of death. The image of Longinus as 'iugulator Christi' ('killer of Christ') appears to have been crucial for the development of the legend. Also, the blindness motif, which rarely features in other European popular traditions until the twelfth century, is attested as early as the eighth century in Ireland, which has led some scholars to suggest a potential Irish origin.

Relevance:

10.00%

Publisher:

Abstract:

In the engineering design of structural shapes, flat-plate analysis results can be generalized to predict the behavior of complete structural shapes. The purpose of this project is to analyze a thin flat plate under conductive heat transfer and to simulate the temperature distribution, thermal stresses, total displacements, and buckling deformations. The current approach in such cases has been the Finite Element Method (FEM), whose basis is the construction of a conforming mesh. In contrast, this project uses the mesh-free Scan Solve Method, which eliminates the meshing limitation by using a non-conforming mesh. I implemented this modeling process by developing numerical algorithms and software tools to model thermally induced buckling. In addition, convergence analysis was performed, and the results were compared with FEM. In conclusion, the results demonstrate that the method gives solutions similar in quality to FEM while being computationally less time-consuming.
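For contrast with the mesh-free approach, the conventional conforming-grid route can be sketched as a one-dimensional steady conduction solve by finite differences (a deliberately simplified stand-in, not the Scan Solve Method or the plate model itself):

```python
# Conforming-grid contrast case: steady 1-D heat conduction T'' = 0 across a
# plate section, relaxed by Jacobi iteration with fixed end temperatures.
# The exact solution is a linear temperature drop between the two ends.

def steady_temperature(t_left, t_right, n_interior, sweeps=5000):
    """Jacobi iteration for the steady conduction equation with Dirichlet ends."""
    T = [t_left] + [0.0] * n_interior + [t_right]
    for _ in range(sweeps):
        # each interior node relaxes to the average of its neighbours
        T = ([T[0]]
             + [(T[i - 1] + T[i + 1]) / 2 for i in range(1, len(T) - 1)]
             + [T[-1]])
    return T

profile = steady_temperature(100.0, 0.0, n_interior=9)
# converges toward the linear drop 100, 90, ..., 10, 0
```

The need to lay such nodes on a conforming grid is exactly the limitation the mesh-free method avoids; the temperature field it produces should nonetheless agree with grid solutions like this one in simple cases.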