Abstract:
We develop an optical system for generating multiple light sheets. This is enabled by employing a special class of spatial filters in a cylindrical lens geometry. The proposed binary filter placed at the back aperture of the cylindrical lens results in the generation of a periodic transverse pattern extending along the z axis (i.e., multiple light sheets). Experimental results confirm the generation of multiple light sheets of thickness 6.6 µm with an intersheet spacing of 13.4 µm. The proposed imaging technique may facilitate three-dimensional imaging in nano-optics, fluorescence microscopy, and nanobiology. (C) 2014 Optical Society of America.
Abstract:
Rugged energy landscapes find wide applications in diverse fields ranging from astrophysics to protein folding. We study the dependence of the diffusion coefficient (D) of a Brownian particle on the distribution width (ε) of randomness in a Gaussian random landscape by simulations and theoretical analysis. We first show that the elegant expression of Zwanzig [Proc. Natl. Acad. Sci. U.S.A. 85, 2029 (1988)] for D(ε) can be reproduced exactly by using the Rosenfeld diffusion-entropy scaling relation. Our simulations show that Zwanzig's expression overestimates D in an uncorrelated Gaussian random lattice, differing by almost an order of magnitude at moderately high ruggedness. The disparity originates from the presence of "three-site traps" (TSTs) on the landscape, which are formed by deep minima flanked by high barriers on either side. Using the mean first passage time formalism, we derive a general expression for the effective diffusion coefficient in the presence of TSTs that quantitatively reproduces the simulation results and reduces to Zwanzig's form only in the limit of infinite spatial correlation. We construct a continuous Gaussian field with inherent correlation to establish the effect of spatial correlation on the random walk. The presence of TSTs at large ruggedness (ε ≫ k_BT) gives rise to an apparent breakdown of ergodicity of the type often encountered in glassy liquids. (C) 2014 AIP Publishing LLC.
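For reference, the Zwanzig expression discussed above has the following well-known form (D_0 denotes the diffusion coefficient on the smooth, ε = 0 landscape; this is the standard 1988 result, not a formula quoted from the abstract):

```latex
% Zwanzig (1988): effective diffusion coefficient on a Gaussian random
% landscape with ruggedness width \varepsilon; D_0 is the smooth-landscape value.
D(\varepsilon) \;=\; D_0 \exp\!\left[-\left(\frac{\varepsilon}{k_B T}\right)^{2}\right]
```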
Abstract:
Although uncertainties in material properties have been addressed in the design of flexible pavements, most current modeling techniques assume that pavement layers are homogeneous. This paper addresses the influence of the spatial variability of the resilient moduli of pavement layers by evaluating the effect of the variance and correlation length on the pavement responses to loading. The integration of the spatially varying log-normal random field with the finite-difference method has been achieved through an exponential autocorrelation function. Variation in the correlation length was found to have a marginal effect on the mean values of the critical strains and a noticeable effect on their standard deviation, which decreases as the correlation length decreases. This reduction in variance arises from spatial averaging over the softer and stiffer zones generated by the spatial variability. The increase in the mean value of the critical strains with decreasing correlation length, although minor, illustrates that pavement performance is adversely affected by the presence of spatially varying layers. The study also confirmed that the higher the variability in the pavement layer moduli, introduced through a higher coefficient of variation (COV), the higher the variability in the pavement response. The study concludes that ignoring spatial variability, by modeling pavement layers that in fact have very short correlation lengths as homogeneous, can result in underestimation of the critical strains and thus an inaccurate assessment of pavement performance. (C) 2014 American Society of Civil Engineers.
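As a minimal sketch of the kind of input such a study requires (not the authors' code; the layer discretization and modulus statistics below are illustrative assumptions), a spatially varying log-normal modulus field with an exponential autocorrelation function can be sampled as follows:

```python
import numpy as np

def lognormal_field(n, dx, corr_len, mean, cov, rng):
    """Sample a 1D log-normal field with exponential autocorrelation.

    n points spaced dx apart; `cov` is the target coefficient of variation.
    """
    x = np.arange(n) * dx
    # Exponential autocorrelation: rho(tau) = exp(-|tau| / corr_len)
    rho = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Moments of the underlying normal field for a log-normal target
    sigma_ln = np.sqrt(np.log(1.0 + cov ** 2))
    mu_ln = np.log(mean) - 0.5 * sigma_ln ** 2
    L = np.linalg.cholesky(rho + 1e-10 * np.eye(n))  # jitter for stability
    g = L @ rng.standard_normal(n)                   # correlated N(0, 1) draws
    return np.exp(mu_ln + sigma_ln * g)

rng = np.random.default_rng(0)
# Illustrative values: 200 points at 0.05 m spacing, 0.5 m correlation
# length, 300 MPa mean resilient modulus, COV = 0.3.
E = lognormal_field(200, 0.05, 0.5, 300.0, 0.3, rng)
print(E.mean(), E.std() / E.mean())  # sample mean and sample COV
```

Shorter correlation lengths make neighboring values nearly independent, so averaging over softer and stiffer zones reduces the response variance, as the abstract reports.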
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling model to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performance of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the two grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) compared to the spatial interpolation technique used in earlier studies.
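A minimal sketch of the SVM-based downscaling step (not the paper's implementation; the predictor matrix, rainfall series, and hyperparameters are synthetic placeholders) could reduce the gridded predictors with PCA and regress monthly rainfall with support vector regression:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 30))   # 240 months x 30 gridded predictors (synthetic)
y = rng.gamma(2.0, 50.0, size=240)   # synthetic monthly rainfall totals (mm)

# Standardize, reduce dimensionality with PCA, then fit an SVM regressor.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(C=10.0, epsilon=1.0))
model.fit(X[:200], y[:200])
print(model.predict(X[200:])[:5])    # downscaled rainfall for held-out months
```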
Abstract:
We address the problem of designing an optimal pointwise shrinkage estimator in the transform domain based on the minimum probability of error (MPE) criterion. We assume an additive model for the noise corrupting the clean signal. The proposed formulation is general in the sense that it can handle various noise distributions. We consider several noise distributions (Gaussian, Student's t, and Laplacian) and compare the denoising performance of the resulting estimator with that of mean-squared error (MSE)-based estimators. The MSE optimization is carried out using an unbiased estimate of the MSE, namely Stein's Unbiased Risk Estimate (SURE). Experimental results show that the MPE estimator outperforms the SURE estimator in terms of the SNR of the denoised output for low (0-10 dB) and medium (10-20 dB) values of the input SNR.
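For context, the SURE criterion mentioned above has the following standard form for a denoiser f applied to y = x + n with Gaussian noise (this is the textbook expression, not one quoted from the paper):

```latex
% Stein's Unbiased Risk Estimate for \hat{x} = f(y), where y = x + n,
% n ~ N(0, \sigma^2 I_N); its expectation equals the per-sample MSE.
\mathrm{SURE}(f) \;=\; \frac{1}{N}\,\lVert f(y) - y \rVert_2^2
\;+\; \frac{2\sigma^2}{N}\,\operatorname{div}_y f(y) \;-\; \sigma^2,
\qquad
\mathbb{E}\!\left[\mathrm{SURE}(f)\right] \;=\; \frac{1}{N}\,\mathbb{E}\,\lVert f(y) - x \rVert_2^2 .
```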
Abstract:
In this article, we study the problem of determining an appropriate grading of meshes for a system of coupled singularly perturbed reaction-diffusion problems having diffusion parameters with different magnitudes. The central difference scheme is used to discretize the problem on an adaptively generated mesh, where the mesh equation is derived using an equidistribution principle. An a priori monitor function is obtained from the error estimate, and a suitable a posteriori analogue of this monitor function is derived for the mesh construction, which leads to optimal second-order parameter-uniform convergence. We present the results of numerical experiments for linear and semilinear reaction-diffusion systems to support the effectiveness of our preferred monitor function obtained from the theoretical analysis. (C) 2014 Elsevier Inc. All rights reserved.
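For orientation, the equidistribution principle mentioned above takes the standard form: a mesh 0 = x_0 < x_1 < ... < x_N = 1 equidistributes a monitor function M when every cell carries the same share of its integral (generic textbook statement, not the paper's specific monitor function):

```latex
% Equidistribution of a monitor function M over a mesh on [0, 1]:
\int_{x_{i-1}}^{x_i} M(s)\,\mathrm{d}s \;=\; \frac{1}{N}\int_{0}^{1} M(s)\,\mathrm{d}s,
\qquad i = 1, \dots, N .
```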
Abstract:
Advances in forest carbon mapping have the potential to greatly reduce uncertainties in the global carbon budget and to facilitate effective emissions mitigation strategies such as REDD+ (Reducing Emissions from Deforestation and Forest Degradation). Though broad-scale mapping is based primarily on remote sensing data, the accuracy of resulting forest carbon stock estimates depends critically on the quality of field measurements and calibration procedures. The mismatch in spatial scales between field inventory plots and the larger pixels of current and planned remote sensing products for forest biomass mapping is of particular concern, as it has the potential to introduce errors, especially if forest biomass shows strong local spatial variation. Here, we used 30 large (8-50 ha) globally distributed permanent forest plots to quantify the spatial variability in aboveground biomass density (AGBD, in Mg ha⁻¹) at spatial scales ranging from 5 to 250 m (0.025-6.25 ha), and to evaluate the implications of this variability for calibrating remote sensing products using simulated remote sensing footprints. We found that local spatial variability in AGBD is large for standard plot sizes, averaging 46.3% for replicate 0.1 ha subplots within a single large plot, and 16.6% for 1 ha subplots. AGBD showed weak spatial autocorrelation at distances of 20-400 m, with autocorrelation higher in sites with higher topographic variability and statistically significant in half of the sites. We further show that when field calibration plots are smaller than the remote sensing pixels, the high local spatial variability in AGBD leads to a substantial "dilution" bias in calibration parameters, a bias that cannot be removed with standard statistical methods. Our results suggest that topography should be explicitly accounted for in future sampling strategies and that much care must be taken in designing calibration schemes if remote sensing of forest carbon is to achieve its promise.
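The "dilution" bias described above is an instance of errors-in-variables attenuation. A minimal illustration (not the paper's analysis; all numbers are synthetic) shows how sub-pixel sampling noise in plot AGBD pulls a fitted calibration slope toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
pixel_agbd = rng.normal(250.0, 60.0, size=1000)        # true pixel-mean AGBD (Mg/ha)
signal = 0.02 * pixel_agbd + rng.normal(0, 0.5, 1000)  # remote-sensing metric
plot_agbd = pixel_agbd + rng.normal(0, 80.0, 1000)     # small plots: noisy samples

slope_true = np.polyfit(pixel_agbd, signal, 1)[0]      # slope vs. pixel truth
slope_plot = np.polyfit(plot_agbd, signal, 1)[0]       # slope vs. noisy plots
print(slope_true, slope_plot)  # the plot-based slope is attenuated ("diluted")
```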
Abstract:
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and, similarly, a matroidal error correcting network abstracted from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, number of sinks, number of coding nodes, and error-correcting capability being arbitrary, limited only by the computing power available for executing the algorithms. The complexity of the construction of these networks is shown to be comparable with the complexity of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there is no linear network-error detecting code that does the same.
Abstract:
Large-scale estimates of the area of terrestrial surface waters have greatly improved over time, in particular through the development of multi-satellite methodologies, but the generally coarse spatial resolution (tens of km) of global observations is still inadequate for many ecological applications. The goal of this study is to introduce a new, globally applicable downscaling method and to demonstrate its applicability by deriving fine-resolution results from coarse global inundation estimates. The downscaling procedure predicts the location of surface water cover with an inundation probability map that was generated by bagged decision trees using globally available topographic and hydrographic information from the SRTM-derived HydroSHEDS database and trained on the wetland extent of the GLC2000 global land cover map. We applied the downscaling technique to the Global Inundation Extent from Multi-Satellites (GIEMS) dataset to produce a new high-resolution inundation map at a pixel size of 15 arc-seconds, termed GIEMS-D15. GIEMS-D15 represents three states of land surface inundation extent: mean annual minimum (total area, 6.5 × 10⁶ km²), mean annual maximum (12.1 × 10⁶ km²), and long-term maximum (17.3 × 10⁶ km²); the latter depicts the largest surface water area of any global map to date. While the accuracy of GIEMS-D15 reflects distribution errors introduced by the downscaling process as well as errors from the original satellite estimates, overall accuracy is good yet spatially variable. A comparison against regional wetland cover maps generated from independent observations shows that the results adequately represent large floodplains and wetlands. GIEMS-D15 offers a higher-resolution delineation of inundated areas than previously available for the assessment of global freshwater resources and the study of large floodplain and wetland ecosystems. The technique of applying inundation probabilities also allows for coupling with coarse-scale hydro-climatological model simulations. (C) 2014 Elsevier Inc. All rights reserved.
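A minimal sketch of the probability-mapping step (not the authors' implementation; predictors and labels below are synthetic stand-ins for the HydroSHEDS and GLC2000 inputs) using bagged decision trees:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 4))   # e.g. elevation, slope, drainage, flow accumulation
# Synthetic wetland labels: low-lying pixels flood more often in this toy setup.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.standard_normal(5000) < -1.0).astype(int)

clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
clf.fit(X, y)
prob_map = clf.predict_proba(X)[:, 1]  # per-pixel inundation probability
print(prob_map[:5])
```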
Abstract:
We propose a laser interference technique for the fabrication of 3D nanostructures. This is made possible by the introduction of a specialized spatial filter in a 2π cylindrical lens system (consisting of two opposing cylindrical lenses sharing a common geometrical focus). The spatial filter at the back aperture of a cylindrical lens gives rise to multiple light-sheet patterns. Two such interfering counter-propagating light-sheet patterns result in a periodic 3D nano-pillar structure. This technique overcomes the existing slow point-by-point scanning and has the ability to pattern selectively over a large volume, allowing large-scale fabrication of periodic structures. A computational study shows a field-of-view (patterning volume) of approximately 12.2 mm³ with a pillar size of 80 nm and an inter-pillar separation of 180 nm. Applications are in nano-waveguides, 3D nano-electronics, photonic crystals, and optical microscopy. (C) 2015 AIP Publishing LLC.
Abstract:
A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching pattern can be viewed as a block design. The proposed codes have the help-by-transfer property, whereby the helper nodes simply transfer part of the stored data directly, without performing any computation. This embedded error correction structure makes the decoding process straightforward, and in some cases the complexity is very low. We show that this construction is able to achieve performance better than space-sharing between the minimum-storage regenerating codes and the minimum-repair-bandwidth regenerating codes, and it is the first class of codes to achieve this performance. In fact, it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff, and it is asymptotically optimal at high rate, i.e., it asymptotically approaches the minimum storage and the minimum repair bandwidth simultaneously.
Abstract:
Gamma-band (25-140 Hz) oscillations are ubiquitous in mammalian forebrain structures involved in sensory processing, attention, learning, and memory. The optic tectum (OT) is the central structure in a midbrain network that participates critically in controlling spatial attention. In this review, we summarize recent advances in characterizing a neural circuit in this midbrain network that generates large-amplitude, space-specific gamma oscillations in the avian OT, both in vivo and in vitro. We describe key physiological and pharmacological mechanisms that produce and regulate the structure of these oscillations. The extensive similarities between midbrain gamma oscillations in birds and those in the neocortex and hippocampus of mammals offer important insights into the functional significance of a midbrain gamma oscillatory code.
Abstract:
When a binary liquid is confined by a strongly repulsive wall, the local density is depleted near the wall and an interface similar to that between the liquid and its vapor is formed. This analogy suggests that the composition of the binary liquid near this interface should exhibit spatial modulation similar to that near a liquid-vapor interface even if the interactions of the wall with the two components of the liquid are the same. The Guggenheim adsorption relation quantifies the concentrations of two components of a binary mixture near a liquid-vapor interface and qualitatively states that the majority (minority) component enriches the interface for negative (positive) mixing energy if the surface tensions of the two components are not very different. From molecular dynamics simulations of binary mixtures with different compositions and interactions we find that the Guggenheim relation is qualitatively satisfied at wall-induced interfaces for systems with negative mixing energy at all state points considered. For systems with positive mixing energy, this relation is found to be qualitatively valid at low densities, while it is violated at state points with high density where correlations in the liquid are strong. This observation is validated by a calculation of the density profiles of the two components of the mixture using density functional theory with the Ramakrishnan-Yussouff free-energy functional. Possible reasons for the violation of the Guggenheim relation are discussed.
Abstract:
A residual-based a posteriori error estimator is derived for a quadratic finite element method (FEM) for the elliptic obstacle problem. The error estimator involves various residuals consisting of the data of the problem, the discrete solution, and a Lagrange multiplier related to the obstacle constraint. The choice of the discrete Lagrange multiplier yields an error estimator that is comparable with the error estimator in the case of linear FEM. Further, an a priori error estimate is derived to show that the discrete Lagrange multiplier converges at the same rate as the discrete solution of the obstacle problem. Numerical experiments with adaptive FEM show optimal-order convergence, demonstrating that the quadratic FEM for the obstacle problem exhibits optimal performance.
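Schematically, residual-based estimators of this type combine element residuals, edge jumps of the flux, and terms controlling the Lagrange multiplier. The generic shape below is for orientation only and is not the paper's exact estimator (u_h is the discrete solution, σ_h the discrete multiplier):

```latex
% Generic residual-based estimator for the obstacle problem, with
% discrete solution u_h and discrete Lagrange multiplier \sigma_h:
\eta^2 \;=\; \sum_{T \in \mathcal{T}_h} h_T^2 \,\lVert f + \Delta u_h - \sigma_h \rVert_{L^2(T)}^2
\;+\; \sum_{E \in \mathcal{E}_h} h_E \,\lVert [\![ \nabla u_h \cdot n ]\!] \rVert_{L^2(E)}^2
\;+\; \text{(multiplier/obstacle consistency terms)} .
```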
Abstract:
We revisit the a posteriori error analysis of discontinuous Galerkin methods for the obstacle problem derived in [25]. Under a mild assumption on the trace of the obstacle, we derive a reliable a posteriori error estimator that does not involve min/max functions. A key ingredient in this approach is an auxiliary problem with a discrete obstacle. Applications to various discontinuous Galerkin finite element methods are presented. Numerical experiments show that the new estimator obtained in this article performs better than the earlier one.