465 results for Lognormal kriging


Relevance: 10.00%

Publisher:

Abstract:

Optimal design for parameter estimation in Gaussian process regression models with input-dependent noise is examined. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators to act as statistical surrogates. In the case of stochastic simulators, which produce a random output for a given set of model inputs, repeated evaluations are useful, supporting the use of replicate observations in the experimental design. The findings are also applicable to the wider context of experimental design for Gaussian process regression and kriging. Designs are proposed with the aim of minimising the variance of the Gaussian process parameter estimates. A heteroscedastic Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models. It is empirically shown that the error of the approximation of the parameter variance by the inverse of the Fisher information is reduced as the number of replicated points is increased. Through a series of simulation experiments on both synthetic data and a systems biology stochastic simulator, optimal designs with replicate observations are shown to outperform space-filling designs both with and without replicate observations. Guidance is provided on best practice for optimal experimental design for stochastic response models. © 2013 Elsevier Inc. All rights reserved.
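As a sketch of the design criterion described above, the Fisher information for a single Gaussian process covariance parameter can be approximated numerically; its inverse approximates the parameter variance that the proposed designs aim to minimise. The squared-exponential kernel, design points and lengthscale below are illustrative assumptions, not the paper's heteroscedastic model:

```python
import numpy as np

def rbf_kernel(x, lengthscale, var=1.0, noise=1e-6):
    # Squared-exponential covariance matrix for 1-D inputs x, with jitter.
    d2 = (x[:, None] - x[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / lengthscale**2) + noise * np.eye(len(x))

def fisher_info_lengthscale(x, lengthscale, eps=1e-5):
    # Fisher information for the lengthscale of a zero-mean GP:
    #   I(l) = 0.5 * tr(K^-1 dK/dl K^-1 dK/dl),
    # with dK/dl obtained by central finite differences.
    K = rbf_kernel(x, lengthscale)
    dK = (rbf_kernel(x, lengthscale + eps) - rbf_kernel(x, lengthscale - eps)) / (2 * eps)
    Kinv_dK = np.linalg.solve(K, dK)
    return 0.5 * np.trace(Kinv_dK @ Kinv_dK)

x = np.linspace(0.0, 1.0, 8)          # a hypothetical 8-point design
info = fisher_info_lengthscale(x, lengthscale=0.3)
print(info)  # larger values -> smaller asymptotic variance of the estimate
```

Comparing `info` across candidate designs (including designs with replicated points) is the spirit of the criterion; the full method in the paper extends this to input-dependent noise.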

Abstract:

Heat sinks are widely used for cooling electronic devices and systems. Their thermal performance is usually determined by the material, shape, and size of the heat sink. With the assistance of computational fluid dynamics (CFD) and surrogate-based optimization, heat sinks can be designed and optimized to achieve a high level of performance. In this paper, the design and optimization of a plate-fin-type heat sink cooled by an impingement jet is presented. The flow and thermal fields are simulated using CFD, and the thermal resistance of the heat sink is then estimated. A Kriging surrogate model is developed to approximate the objective function (thermal resistance) as a function of the design variables. Surrogate-based optimization is implemented by adaptively adding infill points based on an integrated strategy combining the minimum-value, maximum mean square error, and expected improvement approaches. The results show the influence of the design variables on the thermal resistance and give the optimal heat sink with the lowest thermal resistance for the given jet impingement conditions.
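The expected improvement criterion used for infill selection has a closed form under a Kriging (Gaussian) predictor. A minimal sketch for minimising thermal resistance, with made-up Kriging predictions `mu`, standard errors `sigma` and current best value `f_best`:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # EI for minimisation: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2).
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive error
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([0.30, 0.25, 0.28])      # hypothetical predicted thermal resistance
sigma = np.array([0.00, 0.05, 0.01])   # hypothetical Kriging standard errors
ei = expected_improvement(mu, sigma, f_best=0.29)
print(ei.argmax())  # the candidate with the highest EI becomes the next infill point
```

An already-sampled point (`sigma = 0`) gets zero EI, which is what makes the strategy adaptive: it balances exploiting low predicted values against exploring uncertain regions.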

Abstract:

The probability density function (pdf) for a sum of n correlated lognormal variables is derived as a special convolution integral. The pdf for weighted sums (where the weights can be any real numbers) is also presented. The result for four dimensions was checked by Monte Carlo simulation.
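The Monte Carlo check mentioned above can be sketched by drawing correlated normals through a Cholesky factor and exponentiating; the dimension of four, the uniform correlation of 0.5 and the unit variances below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated normals -> correlated lognormals via a Cholesky factor.
n = 4
corr = 0.5
cov = corr * np.ones((n, n)) + (1 - corr) * np.eye(n)  # unit variances
L = np.linalg.cholesky(cov)
z = rng.standard_normal((200_000, n)) @ L.T
s = np.exp(z).sum(axis=1)   # samples of the sum of 4 correlated lognormals

print(s.mean())  # compare against the analytic mean n * exp(sigma^2 / 2)
```

A histogram of `s` can then be compared against the pdf obtained from the convolution integral; the sample mean alone already provides a quick sanity check.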

Abstract:

This research examines design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These considerations for environmental monitoring platforms using wireless sensor networks (WSNs) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and degradation (demethylation).

The sampling methodology investigates a proof-of-concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection.

Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest in unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy). This methodology improves the precision of control by adding potentially significant information about unmonitored locations.

Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis over randomly chosen points representing the sensor locations. Spatial analysis is carried out using geostatistics, and optimization uses Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, taking known obstacles into consideration. Optimal areas for camera placement are those generating the largest OPMs. Statistical behavior is examined using Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
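A minimal sketch of an OPM-style calculation for one grid point: count the directions with unobstructed line-of-sight, stepping outward until an obstacle or the grid border is reached. The grid size, the eight directions and the range limit are assumptions for illustration, not the dissertation's actual parameters:

```python
import numpy as np

def opm(grid, cell, n_dir=8, max_range=10):
    # Optimal placement metric: number of directions with unobstructed
    # line-of-sight from `cell`, stepping until an obstacle (1) or the border.
    rows, cols = grid.shape
    score = 0
    for k in range(n_dir):
        ang = 2 * np.pi * k / n_dir
        dr, dc = np.sin(ang), np.cos(ang)
        r, c = float(cell[0]), float(cell[1])
        clear = True
        for _ in range(max_range):
            r += dr
            c += dc
            ri, ci = int(np.rint(r)), int(np.rint(c))
            if not (0 <= ri < rows and 0 <= ci < cols):
                break           # reached the border without obstruction
            if grid[ri, ci] == 1:
                clear = False   # an obstacle blocks this direction
                break
        if clear:
            score += 1
    return score

grid = np.zeros((9, 9), dtype=int)
grid[4, 6] = 1                      # a single obstacle east of centre
print(opm(grid, (4, 4)))            # 7 of 8 directions remain clear
```

Evaluating `opm` over every free cell and ranking by score mirrors the idea of placing cameras in the areas generating the largest OPMs; a Monte Carlo loop over random obstacle layouts then characterises its statistical behavior.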

Abstract:

The major objectives of this dissertation were to develop optimal spatial techniques to model the spatial-temporal changes of the lake sediments and their nutrients from 1988 to 2006, and to evaluate the impacts of the hurricanes that occurred during 1998–2006. The mud zone shrank by about 10.5% from 1988 to 1998, and grew by about 6.2% from 1998 to 2006. Mud areas, volumes and weights were calculated using validated Kriging models. From 1988 to 1998, mud thicknesses increased by up to 26 cm in the central lake area, while the mud area and volume decreased by about 13.78% and 10.26%, respectively. From 1998 to 2006, mud depths declined by up to 41 cm in the central lake area and mud volume was reduced by about 27%. Mud weight increased by up to 29.32% from 1988 to 1998, but fell by over 20% from 1998 to 2006. The reduction of mud sediments is likely due to re-suspension and redistribution by waves and currents produced by large storm events, particularly Hurricanes Frances and Jeanne in 2004 and Wilma in 2005. Regression, kriging, geographically weighted regression (GWR) and regression-kriging models were calibrated and validated for the spatial analysis of the TP and TN of the lake sediments. GWR models provide the most accurate predictions for TP and TN based on model performance and error analysis. TP values declined from an average of 651 to 593 mg/kg from 1998 to 2006, especially in the lake's western and southern regions. From 1988 to 1998, TP declined in the northern and southern areas, and increased in the central-western part of the lake. TP weights increased by about 37.99%–43.68% from 1988 to 1998 and decreased by about 29.72%–34.42% from 1998 to 2006. From 1988 to 1998, TN decreased in most areas, especially in the northern and southern lake regions; the western littoral zone had the biggest increase, up to 40,000 mg/kg. From 1998 to 2006, TN declined from an average of 9,363 to 8,926 mg/kg, especially in the central and southern regions, while the biggest increases occurred in the northern lake and southern edge areas. TN weights increased by about 15%–16.2% from 1988 to 1998, and decreased by about 7%–11% from 1998 to 2006.

Abstract:

Microarray platforms have been around for many years, and while new technologies are on the rise in laboratories, microarrays are still prevalent. For the analysis of microarray data to identify differentially expressed (DE) genes, many methods have been proposed and modified for improvement. However, the most popular methods, such as Significance Analysis of Microarrays (SAM), samroc, fold change, and rank product, are far from perfect. Which method is most powerful depends on the characteristics of the sample and the distribution of the gene expressions. The most practiced method is usually SAM or samroc, but when the data tend to be skewed, the power of these methods decreases. Building on the idea that the median is a better measure of central tendency than the mean when the data are skewed, the test statistics of the SAM and fold change methods are modified in this thesis. This study shows that the median-modified fold change method improves the power in many cases when identifying DE genes if the data follow a lognormal distribution.

Abstract:

This study aimed to evaluate the influence of the main meteorological mechanisms that produce or inhibit precipitation, and the interactions between their different scales of operation, on the spatial and temporal variability of the annual precipitation cycle in Rio Grande do Norte. In addition, local and regional circumstances were considered, thereby creating a scientific basis to support future actions in managing water demand in the State. The database comprises 45 years of monthly precipitation, from 1963 to 2007, provided by EMPARN. The methodology first applied descriptive statistical analysis to the historical data to verify the stability of the series; geostatistical tools were then used to map the variables, with Kriging chosen as the interpolation method because it produced the best results and the smallest errors. Among the results, we highlight the State's annual rainfall cycle, which is influenced by meteorological mechanisms of different spatial and temporal scales. The main modulating mechanisms are the Intertropical Convergence Zone (ITCZ), acting from mid-February to mid-May throughout the State; easterly waves, squall lines, breeze systems and orographic rainfall, acting mainly on the coastal strip between February and July; and upper-level cyclonic vortices (VCANs), Mesoscale Convective Complexes (MCCs) and orographic rain in any region of the State, mainly in spring and summer. Among larger-scale phenomena, El Niño and La Niña (ENSO) in the tropical Pacific basin stand out: La Niña episodes usually bring normal or rainy years, whereas prolonged periods of drought are influenced by El Niño. In the Atlantic Ocean, the Dipole pattern also affects the intensity of the rainfall cycle in the State.
The rainfall cycle in Rio Grande do Norte is divided into two periods: one covering the West and Central mesoregions and the western portion of the Wasteland Potiguar, west of the Chapada Borborema, with rains from mid-February to mid-May; and a second, between February and July, when rains occur in the East and Wasteland mesoregions, located upwind of the Chapada Borborema. Both are interspersed with dry periods without significant rainfall, and with rainy-dry and dry-rainy transition periods in which isolated rainfall occurs. Approximately 82% of the State's rainfall stations, corresponding to 83.4% of the area of Rio Grande do Norte, do not record annual volumes above 900 mm. Because the State's water supply is maintained by small reservoirs that are already in an advanced state of eutrophication, the rains, when they occur, wash and replace the water in the reservoirs, improving their quality and slowing the eutrophication process. When significant rains fail to occur, or after long periods of shortage, the eutrophication and deterioration of the water in the dams increase markedly. Knowledge of the behavior of the annual rainfall cycle thus gives insight into whether the following period will tend to be rainy or drought-prone, mainly by observing the trends of the larger-scale phenomena.

Abstract:

The L-moments based index-flood procedure was successfully applied to Regional Flood Frequency Analysis (RFFA) for the Island of Newfoundland in 2002, using data up to 1998. This thesis considered both Labrador and the Island of Newfoundland, using the L-moments index-flood method with flood data up to 2013. For Labrador, the homogeneity test showed that Labrador can be treated as a single homogeneous region, and the generalized extreme value (GEV) distribution was found to be more robust than any other frequency distribution. Drainage area (DA) is the only significant variable for estimating the index flood at ungauged sites in Labrador. In previous studies, the Island of Newfoundland has been divided into four homogeneous regions (A, B, C and D) as well as the Water Survey of Canada's Y and Z sub-regions. Homogeneous regions based on Y and Z were found to provide more accurate quantile estimates than those based on the four homogeneous regions. Goodness-of-fit test results showed that the GEV distribution is most suitable for the sub-regions; however, the three-parameter lognormal (LN3) gave a better performance in terms of robustness. The best-fitting regional frequency distribution from 2002 has now been updated with the latest flood data, but quantile estimates with the new data were not very different from the previous study. Overall, in terms of quantile estimation in both Labrador and the Island of Newfoundland, the index-flood procedure based on L-moments is highly recommended, as it provided consistent and more accurate results than other techniques, such as the regression-on-quantiles technique currently used by the government.
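The sample L-moments underlying the index-flood procedure can be computed from probability weighted moments. A sketch on synthetic annual maxima (the Gumbel location and scale are assumptions for illustration, not values from the study):

```python
import numpy as np

def l_moments(x):
    # First four sample L-moments via probability weighted moments (PWMs):
    # l1 = b0, l2 = 2 b1 - b0, l3 = 6 b2 - 6 b1 + b0, l4 = 20 b3 - 30 b2 + 12 b1 - b0.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(2)
flows = rng.gumbel(loc=100.0, scale=20.0, size=5000)   # synthetic annual maxima
l1, l2, l3, l4 = l_moments(flows)
print(l3 / l2)   # sample L-skewness; ~0.17 for a Gumbel distribution
```

Ratios such as the L-CV (l2/l1) and L-skewness (l3/l2) are what drive the homogeneity and goodness-of-fit tests mentioned above, since candidate distributions like the GEV and LN3 have known theoretical L-moment ratios.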

Abstract:

This study subdivides the Weddell Sea, Antarctica, into seafloor regions using multivariate statistical methods. These regions serve as categories for comparing, contrasting and quantifying biogeochemical processes and biodiversity between ocean regions, both geographically and for regions under development within the scope of global change. The division obtained is characterized by its dominating components and interpreted in terms of the ruling environmental conditions. The analysis uses 28 environmental variables for the sea surface, 25 variables for the seabed and 9 variables for the analysis between surface and bottom variables. The data were collected during the years 1983-2013, and some data were interpolated. The statistical errors of several interpolation methods (e.g. IDW, Indicator, Ordinary and Co-Kriging) with varying settings were compared to identify the most reasonable method. The multivariate procedures used are regionalized classification via k-means cluster analysis, canonical-correlation analysis and multidimensional scaling. Canonical-correlation analysis identifies the influencing factors in the different parts of the study area. Several methods for identifying the optimum number of clusters were tested. For the seabed, 8 and 12 clusters were identified as reasonable numbers for clustering the Weddell Sea; for the sea surface, 8 and 13; and for the top/bottom analysis, 8 and 3, respectively. Additionally, results for 20 clusters are presented for the three alternatives, offering the first small-scale environmental regionalization of the Weddell Sea. The results for 12 clusters in particular identify marine-influenced regions which can be clearly separated from those determined by the geological catchment area and those dominated by river discharge.
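The regionalized classification via k-means can be sketched with a plain Lloyd's-algorithm implementation on synthetic data; the two "environmental regimes", the two variables and the deterministic initialisation below are assumptions for illustration:

```python
import numpy as np

def kmeans(X, k, n_iter=50, init=None):
    # Plain Lloyd's algorithm: assign each sample to its nearest centroid,
    # then move each centroid to the mean of its assigned samples.
    if init is None:
        init = np.arange(k)
    centroids = X[init].astype(float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic "regimes" in a two-variable space; seeding
# one centroid in each blob keeps this sketch deterministic.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)), rng.normal(4.0, 0.5, (100, 2))])
labels, centroids = kmeans(X, k=2, init=[0, 199])
```

In the study, each sample would be a grid cell with its standardised environmental variables, and the cluster labels mapped back onto the grid yield the seafloor regions; choosing k (8, 12, 13, ...) is the separate model-selection step described above.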

Abstract:

Energy saving, reduction of greenhouse gases and increased use of renewables are key policies for achieving the European 2020 targets. In particular, distributed renewable energy sources, integrated with spatial planning, require novel methods to optimise supply and demand. In contrast with large-scale wind turbines, small and medium wind turbines (SMWTs) have a less extensive impact on the use of space and the power system; nevertheless, a significant spatial footprint is still present and good spatial planning is a necessity. To optimise the location of SMWTs, detailed knowledge of the spatial distribution of the average wind speed is essential. Hence, in this article, wind measurements and roughness maps were used to create a reliable annual mean wind speed map of Flanders at 10 m above the Earth's surface. Via roughness transformation, the surface wind speed measurements were converted into meso- and macroscale wind data. The data were further processed using seven different spatial interpolation methods in order to develop regional wind resource maps. Based on statistical analysis, it was found that the transformation into mesoscale wind, in combination with Simple Kriging, was the most adequate method to create reliable maps for decision-making on optimal production sites for SMWTs in Flanders (Belgium).
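A roughness transformation between heights can be sketched with the neutral-stability logarithmic wind profile; the reference height, wind speed and roughness length below are hypothetical values, not the article's data:

```python
import numpy as np

def transform_wind(u_ref, z_ref, z_target, z0):
    # Neutral-stability logarithmic wind profile:
    #   u(z) = (u* / kappa) * ln(z / z0),
    # so speeds at two heights over the same roughness z0 are related by
    #   u(z2) = u(z1) * ln(z2 / z0) / ln(z1 / z0).
    return u_ref * np.log(z_target / z0) / np.log(z_ref / z0)

# A hypothetical 60 m mesoscale wind of 7 m/s brought down to 10 m
# over open farmland (roughness length z0 ~ 0.03 m).
u10 = transform_wind(u_ref=7.0, z_ref=60.0, z_target=10.0, z0=0.03)
print(round(u10, 2))   # ~5.35 m/s
```

The article's workflow runs this in the other direction as well (surface measurements up to meso/macroscale), with the local roughness map supplying z0, before the interpolated 10 m map is produced.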

Abstract:

An RVE-based stochastic numerical model is used to calculate the permeability of randomly generated porous media at different values of the fiber volume fraction, for the case of transverse flow in a unidirectional ply. Analysis of the numerical results shows that the permeability is not normally distributed. With the aim of proposing a new understanding of this particular topic, the permeability data are fitted using both a mixture model and a unimodal distribution. Our findings suggest that permeability can be fitted well using a mixture model based on the lognormal and power law distributions. In the case of a unimodal distribution, it is found, using maximum-likelihood estimation (MLE), that the generalized extreme value (GEV) distribution provides the best fit. Finally, an expression for the permeability as a function of the fiber volume fraction, based on the GEV distribution, is discussed in light of the previous results.
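Fitting a GEV distribution by MLE, as done for the unimodal case, can be sketched with `scipy.stats.genextreme` (note SciPy's sign convention for the shape parameter); the parameter values and sample below are assumptions for illustration, not the paper's permeability data:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Synthetic "permeability-like" sample drawn from a GEV, then refitted by MLE.
true_shape, true_loc, true_scale = -0.1, 1.0, 0.2
sample = genextreme.rvs(true_shape, loc=true_loc, scale=true_scale,
                        size=5000, random_state=rng)

# genextreme.fit maximises the likelihood over (shape, loc, scale).
shape, loc, scale = genextreme.fit(sample)
print(shape, loc, scale)
```

With real data one would compare the maximised log-likelihoods (or AIC) of the GEV fit against lognormal, power-law and mixture alternatives, which is how the GEV emerges as the best unimodal candidate above.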

Abstract:

Specimens from split Hopkinson pressure bar experiments, at strain rates between ~1000 and 9000 s⁻¹ at room temperature and 500 °C, have been studied using electron backscatter diffraction. No significant differences in the microstructures were observed at different strain rates, but differences were observed for different strains and temperatures. The size distribution for subgrains with boundary misorientations > 2° can be described as a bimodal lognormal area distribution. The distributions were found to change with deformation: the part of the distribution describing the large subgrains decreased, while that for the small subgrains increased. This is in accordance with deformation being heterogeneous and successively spreading into the undeformed parts of individual grains. The average size of the small-subgrain distribution varies with strain but not with strain rate in the tested interval. The mean free distance for dislocation slip, interpreted here as the average size of the distribution of small subgrains, displays a variation with plastic strain that is in accordance with the different stages in the stress-strain curves. The rate of deformation hardening in the linear hardening range is accurately calculated using the variation of the small subgrain size with strain.
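A bimodal lognormal area distribution can be sketched by mixing two lognormal subgrain populations and weighting by area; the component parameters and the mixing weight below are illustrative assumptions, not fitted values from the experiments:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two lognormal subgrain populations: many "small" (deformed) subgrains and
# fewer "large" (not yet consumed) ones. All parameters are hypothetical.
n_small, n_large = 70_000, 30_000
small = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n_small)
large = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=n_large)
sizes = np.concatenate([small, large])

# Area-weighted fraction contributed by each mode (area ~ size^2 in 2-D):
areas = sizes ** 2
frac_small = areas[:n_small].sum() / areas.sum()
print(frac_small)  # the many small subgrains still hold a minority of the area
```

Area weighting is why the distribution reads as bimodal in EBSD maps: the numerous small subgrains and the few large ones each contribute a distinct lognormal mode, and deformation shifts weight from the large mode to the small one.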

Abstract:

Intertidal flats of the estuarine macro-intertidal Baie des Veys (France) were investigated to identify spatial features of sediment and microphytobenthos (MPB) in April 2003. Gradients occurred within the domain, and patches were identified close to vegetated areas or within the oyster-farming areas, where calm physical conditions and biodeposition altered the sediment and MPB landscapes. Spatial patterns of chl a content were explained primarily by the influence of sediment features, while bed elevation and compaction brought only minor insights into the regulation of MPB distribution. The smaller size of MPB patches compared to silt patches revealed the interplay between the physical structures defining the sediment landscape and the biotic patches they contain; median grain-size was the most important parameter in explaining the spatial pattern of MPB. Small-scale temporal dynamics of sediment chl a content and grain-size distribution were surveyed in parallel during two 14 d periods to detect tidal and seasonal variations. Our results showed a weak relationship between mud fraction and MPB biomass in March, and this relationship fully disappeared in July. Tidal exposure was the most important parameter in explaining the summer temporal dynamics of MPB. This study reveals the general importance of bed elevation and tidal exposure in muddy habitats, and shows that silt content was the prime governing physical factor in winter. Biostabilisation processes seemed to behave only as secondary factors that could amplify the initial silt accumulation in summer, rather than as primary factors explaining spatial or long-term trends of sediment changes.

Abstract:

Under contact metamorphic conditions, carbonate rocks in the direct vicinity of the Adamello pluton reflect temperature-induced grain coarsening. Despite this large-scale trend, considerable grain size scatter occurs on the outcrop scale, indicating the local influence of second-order effects such as thermal perturbations, fluid flow and second-phase particles. Second-phase particles, whose sizes range from the nano- to the micron-scale, induce the most pronounced data scatter, resulting in grain sizes too small by up to a factor of 10 compared with theoretical grain growth in a pure system. Such values are restricted to relatively impure samples consisting of up to 10 vol.% micron-scale second-phase particles, or to samples containing a large number of nano-scale particles. The data set obtained suggests that the second phases induce a temperature-controlled reduction of calcite grain growth. The mean calcite grain size can therefore be expressed in the form D = C2 e^(Q*/RT) (dp/fp)^(m*), where C2 is a constant, Q* is an activation energy, T the temperature and m* the exponent of the ratio dp/fp, i.e. of the average size of the second phases divided by their volume fraction. However, more data are needed to obtain reliable values for C2 and Q*. Besides variations in the average grain size, the presence of second-phase particles generates crystal size distribution (CSD) shapes characterized by lognormal distributions, which differ from the Gaussian-type distributions of the pure samples. In contrast, fluid-enhanced grain growth does not change the shape of the CSDs, but, due to enhanced transport properties, the average grain sizes increase by a factor of 2 and the variance of the distribution increases. Stable δ18O and δ13C isotope ratios in fluid-affected zones deviate only slightly from the host rock values, suggesting low fluid/rock ratios. Grain growth modelling indicates that the fluid-induced grain size variations can develop within several ka.
As inferred from a combination of thermal and grain growth modelling, dykes with widths of up to 1 m have only a restricted influence, with grain size deviations smaller than a factor of 1.1. To summarize, considerable grain size variations of up to one order of magnitude can locally result from second-order effects. Such effects require special attention when comparing experimentally derived grain growth kinetics with field studies.
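The grain size relation can be evaluated numerically to show the pinning effect of the second phases; since reliable values for C2 and Q* are noted to be lacking, the constants below (including the sign of Q*, chosen here so that grain size increases with temperature) are purely hypothetical:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def calcite_grain_size(T, dp, fp, C2=1e-4, Q=-1.9e5, m=0.33):
    # D = C2 * exp(Q*/(R T)) * (dp/fp)^m* with hypothetical C2, Q* and m*;
    # dp is the mean second-phase particle size, fp their volume fraction.
    return C2 * np.exp(Q / (R * T)) * (dp / fp) ** m

T = 850.0  # K, an arbitrary contact-metamorphic temperature
# Increasing the second-phase volume fraction fp tenfold at fixed dp:
d_pure = calcite_grain_size(T, dp=1e-6, fp=0.01)
d_impure = calcite_grain_size(T, dp=1e-6, fp=0.10)
print(d_impure / d_pure)  # more second phases -> smaller calcite grains
```

With m* ~ 0.33, a tenfold increase in fp reduces the predicted grain size by a factor of about 10^0.33 ≈ 2, illustrating how impure samples fall below the pure-system growth trend.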

Abstract:

The long-term adverse health effects associated with air pollution exposure can be estimated using either cohort or spatio-temporal ecological designs. In a cohort study, the health status of a cohort of people is assessed periodically over a number of years and then related to estimated ambient pollution concentrations in the cities in which they live. However, such cohort studies are expensive and time-consuming to implement, due to the long-term follow-up required for the cohort. Therefore, spatio-temporal ecological studies are also used to estimate the long-term health effects of air pollution, as they are easy to implement due to the routine availability of the required data. Spatio-temporal ecological studies estimate the health impact of air pollution by utilising geographical and temporal contrasts in air pollution and disease risk across n contiguous small areas, such as census tracts or electoral wards, for multiple time periods. The disease data are counts of the numbers of disease cases occurring in each areal unit and time period, and thus Poisson log-linear models are typically used for the analysis. The linear predictor includes pollutant concentrations and known confounders such as socio-economic deprivation. However, as the disease data typically contain residual spatial or spatio-temporal autocorrelation after the covariate effects have been accounted for, these known covariates are augmented by a set of random effects. One key problem in these studies is estimating spatially representative pollution concentrations in each areal unit; these are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over modelled (grid-level) concentrations from an atmospheric dispersion model. The aim of this thesis is to investigate the health effects of long-term exposure to nitrogen dioxide (NO2) and particulate matter (PM10) in mainland Scotland, UK.
In order to have an initial impression about the air pollution health effects in mainland Scotland, chapter 3 presents a standard epidemiological study using a benchmark method. The remaining main chapters (4, 5, 6) cover the main methodological focus in this thesis which has been threefold: (i) how to better estimate pollution by developing a multivariate spatio-temporal fusion model that relates monitored and modelled pollution data over space, time and pollutant; (ii) how to simultaneously estimate the joint effects of multiple pollutants; and (iii) how to allow for the uncertainty in the estimated pollution concentrations when estimating their health effects. Specifically, chapters 4 and 5 are developed to achieve (i), while chapter 6 focuses on (ii) and (iii). In chapter 4, I propose an integrated model for estimating the long-term health effects of NO2, that fuses modelled and measured pollution data to provide improved predictions of areal level pollution concentrations and hence health effects. The air pollution fusion model proposed is a Bayesian space-time linear regression model for relating the measured concentrations to the modelled concentrations for a single pollutant, whilst allowing for additional covariate information such as site type (e.g. roadside, rural, etc) and temperature. However, it is known that some pollutants might be correlated because they may be generated by common processes or be driven by similar factors such as meteorology. The correlation between pollutants can help to predict one pollutant by borrowing strength from the others. Therefore, in chapter 5, I propose a multi-pollutant model which is a multivariate spatio-temporal fusion model that extends the single pollutant model in chapter 4, which relates monitored and modelled pollution data over space, time and pollutant to predict pollution across mainland Scotland. 
Considering that we are exposed to multiple pollutants simultaneously, because the air we breathe contains a complex mixture of particle and gas phase pollutants, the health effects of exposure to multiple pollutants are investigated in chapter 6 as a natural extension of the single-pollutant health effects in chapter 4. Given that NO2 and PM10 are highly correlated (a multicollinearity issue) in my data, I first propose a temporally-varying linear model to regress one pollutant (e.g. NO2) against another (e.g. PM10), and then use the residuals in the disease model alongside PM10, thus investigating the health effects of exposure to both pollutants simultaneously. Another issue considered in chapter 6 is allowing for the uncertainty in the estimated pollution concentrations when estimating their health effects; in total, four approaches are developed to adjust for this exposure uncertainty. Finally, chapter 7 summarises the work contained within this thesis and discusses the implications for future research.
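The residual approach to the multicollinearity issue can be sketched with ordinary least squares: regress one pollutant on the other and keep the residuals, which are uncorrelated with the regressor by construction. The synthetic series below are illustrative assumptions, not the thesis's Scottish data, and a plain (not temporally-varying) regression is used for simplicity:

```python
import numpy as np

rng = np.random.default_rng(6)

# Highly correlated synthetic pollutant series (NO2 driven partly by PM10).
n = 500
pm10 = rng.normal(20.0, 5.0, size=n)
no2 = 0.8 * pm10 + rng.normal(0.0, 2.0, size=n)

# Regress NO2 on PM10 and keep the residuals: the part of NO2 not explained
# by PM10, which can enter the disease model alongside PM10 without the
# multicollinearity problem.
X = np.column_stack([np.ones(n), pm10])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
no2_resid = no2 - X @ beta

print(abs(np.corrcoef(no2_resid, pm10)[0, 1]))  # ~0 by construction
```

Because the residual series is orthogonal to PM10, the two exposure covariates in the Poisson log-linear disease model no longer compete for the same variation, at the cost of reinterpreting the NO2 effect as the effect of NO2 over and above PM10.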