950 results for Gaussian random fields


Relevance:

80.00%

Publisher:

Abstract:

We present in this paper approximate analytical expressions for the intensity of light scattered by a rough surface whose elevation ξ(x, y) in the z-direction is a zero-mean stationary Gaussian random variable. With (x, y) and (x', y') being two points on the surface, we have ⟨ξ⟩ = 0 with a correlation ⟨ξ(x, y)ξ(x', y')⟩ = σ²g(r), where r = [(x − x')² + (y − y')²]^(1/2) is the distance between these two points. We consider g(r) = exp[−(r/l)^β] with 1 ≤ β ≤ 2, so that g(0) = 1 and g(r) → 0 for r ≫ l. The intensity is sought in the form f(v_xy) = {1 + (c/2γ)[v_x² + v_y²]}^(−γ), where v_x and v_y are the wave vectors of scattering, as defined in the Beckmann notation. In the paper we present expressions for c and γ in terms of σ, l, and β. The closed-form expressions are verified for the cases β = 1 and β = 2, for which exact expressions are known. For the other cases, i.e., β ≠ 1, 2, we present approximate expressions for the scattered intensity in the range v_xy = (v_x² + v_y²)^(1/2) ≤ 6.0 and show that the relation for f(v_xy) given above expresses the scattered intensity quite accurately, thus providing a simple computational method in situations of practical importance.
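
As a minimal illustration of the functional form above, the following sketch evaluates f(v_xy) = [1 + (c/2γ)(v_x² + v_y²)]^(−γ); the values of c and γ are illustrative placeholders, not the paper's expressions in terms of σ, l and β.

```python
import numpy as np

def scattered_intensity(vx, vy, c, gamma):
    """Approximate angular intensity profile of the quoted form.

    c and gamma are surface-dependent fit parameters (the paper expresses
    them in terms of sigma, l and beta; the values passed below are
    illustrative placeholders)."""
    return (1.0 + (c / (2.0 * gamma)) * (vx**2 + vy**2)) ** (-gamma)

# The profile is normalized to 1 at v_xy = 0 for any c, gamma,
# and decreases monotonically with increasing v_xy.
print(scattered_intensity(0.0, 0.0, c=1.5, gamma=2.0))
print(scattered_intensity(3.0, 4.0, c=1.5, gamma=2.0)
      < scattered_intensity(1.0, 1.0, c=1.5, gamma=2.0))
```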

Relevance:

80.00%

Publisher:

Abstract:

Structural Support Vector Machines (SSVMs) and Conditional Random Fields (CRFs) are popular discriminative methods used for classifying structured and complex objects like parse trees, image segments and part-of-speech tags. The datasets involved are of very high dimension, and the models designed using typical training algorithms for SSVMs and CRFs are non-sparse. This non-sparseness results in slow inference. Thus, there is a need to devise new algorithms for sparse SSVM and CRF classifier design. The use of the elastic net and the L1-regularizer has already been explored for solving primal CRF and SSVM problems, respectively, to design sparse classifiers. In this work, we focus on the dual elastic net regularized SSVM and CRF. By exploiting the weakly coupled structure of these convex programming problems, we propose a new sequential alternating proximal (SAP) algorithm to solve these dual problems. This algorithm works by sequentially visiting each training set example and solving a simple subproblem restricted to a small subset of variables associated with that example. Numerical experiments on various benchmark sequence labeling datasets demonstrate that the proposed algorithm scales well. Further, the classifiers designed are sparser than those designed by solving the respective primal problems and demonstrate comparable generalization performance. Thus, the proposed SAP algorithm is a useful alternative for sparse SSVM and CRF classifier design.
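
The sequential pattern described above can be illustrated on a toy problem. The sketch below is not the paper's SSVM/CRF solver; it only mimics the SAP access pattern, visiting one example's small block of variables at a time and solving a proximally regularized subproblem in closed form, here for a separable quadratic with an assumed proximal weight rho.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks, dim, rho = 5, 3, 1.0
Q, b, x = [], [], []
for _ in range(n_blocks):
    a = rng.standard_normal((dim, dim))
    Q.append(a @ a.T + np.eye(dim))   # positive definite per-example curvature
    b.append(rng.standard_normal(dim))
    x.append(np.zeros(dim))

for _ in range(30):                   # sequential passes over the examples
    for i in range(n_blocks):
        # proximal subproblem for block i:
        #   x_i <- argmin_z 0.5 z'Q_i z - b_i'z + (rho/2)||z - x_i||^2
        x[i] = np.linalg.solve(Q[i] + rho * np.eye(dim), b[i] + rho * x[i])

residual = max(np.linalg.norm(Q[i] @ x[i] - b[i]) for i in range(n_blocks))
print(residual < 1e-6)                # each block reaches its own optimum
```

Because each proximal map is a contraction toward the block optimum, a few dozen sequential passes drive every block's optimality residual to numerical zero.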

Relevance:

80.00%

Publisher:

Abstract:

The study introduces two new alternatives for global response sensitivity analysis based on the application of the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices based on the L2-norm involve second-order moments of the response and, when applied to the case of an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. This measure is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. Other issues addressed in the study cover Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function. (C) 2015 Elsevier Ltd. All rights reserved.
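
For the i.i.d. case mentioned above, where the L2-norm indices reduce to the classical Sobol indices, a pick-freeze Monte Carlo estimate can be sketched as follows; the linear model Y = X1 + 0.5·X2 is an illustrative choice whose first-order index is known analytically (S1 = 1/1.25 = 0.8).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
model = lambda x1, x2: x1 + 0.5 * x2   # illustrative model with known S1

x1 = rng.standard_normal(N)
x2 = rng.standard_normal(N)
x2_new = rng.standard_normal(N)

y = model(x1, x2)
y_frozen = model(x1, x2_new)           # X1 kept ("frozen"), X2 resampled
# Cov(Y, Y') / Var(Y) estimates the first-order Sobol index of X1
s1 = (np.mean(y * y_frozen) - y.mean() * y_frozen.mean()) / y.var()
print(s1)                              # close to the analytic value 0.8
```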

Relevance:

80.00%

Publisher:

Abstract:

The spatial variability that exists in pavement systems can be conveniently represented by means of random fields; in this study, a probabilistic analysis that considers the spatial variability, including the anisotropic nature of the pavement layer properties, is presented. The integration of the spatially varying log-normal random fields into a linear-elastic finite-difference analysis has been achieved through the expansion optimal linear estimation method. For the estimation of the critical pavement responses, metamodels based on polynomial chaos expansion (PCE) are developed to replace the computationally expensive finite-difference model. A sparse polynomial chaos expansion, built with an adaptive regression-based algorithm and enhanced by global sensitivity analysis (GSA), is used, with significant savings in computational effort. The effect of anisotropy in each layer on the pavement responses was studied separately, and an effort is made to identify the pavement layer wherein the introduction of anisotropic characteristics results in the most significant impact on the critical strains. It is observed that the anisotropy in the base layer has a significant but contrasting effect on the two critical strains: while the compressive strain tends to be considerably higher than that observed for the isotropic section, the tensile strains show a decrease in the mean value with the introduction of base-layer anisotropy. Furthermore, asphalt-layer anisotropy also tends to decrease the critical tensile strain while having little effect on the critical compressive strain. (C) 2015 American Society of Civil Engineers.
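
The metamodel idea can be sketched in one dimension: a polynomial chaos expansion in probabilists' Hermite polynomials, fitted by least-squares regression to a cheap stand-in for the expensive model. The adaptive sparse scheme and the pavement finite-difference model of the study are not reproduced here.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)
expensive_model = lambda x: np.sin(x) + 0.1 * x**2   # illustrative stand-in

x_train = rng.standard_normal(500)                   # input X ~ N(0, 1)
Psi = hermevander(x_train, 6)                        # He_0(x) .. He_6(x)
coef, *_ = np.linalg.lstsq(Psi, expensive_model(x_train), rcond=None)

x_test = np.linspace(-2.0, 2.0, 9)
surrogate = hermevander(x_test, 6) @ coef
err = np.max(np.abs(surrogate - expensive_model(x_test)))
print(err)   # small approximation error on [-2, 2]
```

Once fitted, the surrogate is evaluated by a single matrix-vector product, which is what makes Monte Carlo studies over the random inputs affordable.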

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents a technique for obtaining the response of linear structural systems with parameter uncertainties subjected to either deterministic or random excitation. The parameter uncertainties are modeled as random variables or random fields, and are assumed to be time-independent. The new method is an extension of the deterministic finite element method to the space of random functions.

First, the general formulation of the method is developed, in the case where the excitation is deterministic in time. Next, the application of this formulation to systems satisfying the one-dimensional wave equation with uncertainty in their physical properties is described. A particular physical conceptualization of this equation is chosen for study, and some engineering applications are discussed in both an earthquake ground motion and a structural context.

Finally, the formulation of the new method is extended to include cases where the excitation is random in time. Application of this formulation to the random response of a primary-secondary system is described. It is found that parameter uncertainties can have a strong effect on the system response characteristics.

Relevance:

80.00%

Publisher:

Abstract:

A study of human eye movements was made in order to elucidate the nature of the control mechanism in the binocular oculomotor system.

We first examined spontaneous eye movements during monocular and binocular fixation in order to determine the corrective roles of flicks and drifts. It was found that both types of motion correct fixational errors, although flicks are somewhat more active in this respect. Vergence error is a stimulus for correction by drifts but not by flicks, while binocular vertical discrepancy of the visual axes does not trigger corrective movements.

Second, we investigated the non-linearities of the oculomotor system by examining the eye movement responses to point targets moving in two dimensions in a subjectively unpredictable manner. Such motions consisted of band-limited Gaussian random motion and also of the sum of several non-integrally related sinusoids. We found that there is no direct relationship between the phase and the gain of the oculomotor system. Delay of eye movements relative to target motion is determined by the necessity of generating a minimum afferent (input) signal at the retina in order to trigger corrective eye movements. The amplitude of the response is a function of the biological constraints of the efferent (output) portion of the system: for target motions of narrow bandwidth, the system responds preferentially to the highest frequency; for large bandwidth motions, the system distributes the available energy equally over all frequencies.

Third, the power spectra of spontaneous eye movements were compared with the spectra of tracking eye movements for Gaussian random target motions of varying bandwidths. It was found that there is essentially no difference among the various curves. The oculomotor system tracks a target, not by increasing the mean rate of impulses along the motoneurons of the extra-ocular muscles, but rather by coordinating those spontaneous impulses which propagate along the motoneurons during stationary fixation. Thus, the system operates at full output at all times.
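
The power-spectrum comparison described above can be sketched with a plain averaged periodogram (numpy only); the sampling rate and the random trace below are illustrative stand-ins for recorded eye-position data.

```python
import numpy as np

def averaged_periodogram(x, fs, nseg=8):
    """Average the periodograms of nseg equal segments of x (mean removed)."""
    segs = np.array_split(np.asarray(x, float) - np.mean(x), nseg)
    n = min(len(s) for s in segs)
    psd = np.mean([np.abs(np.fft.rfft(s[:n]))**2 for s in segs], axis=0)
    return np.fft.rfftfreq(n, d=1.0 / fs), psd / (fs * n)

rng = np.random.default_rng(3)
fs = 200.0                              # Hz, illustrative sampling rate
trace = rng.standard_normal(4000)       # stand-in for one eye-position trace
freqs, psd = averaged_periodogram(trace, fs)
print(freqs[0], freqs[-1])              # the spectrum spans 0 .. fs/2
```

Applying the same estimator to a spontaneous-fixation trace and a tracking trace, and overlaying the two curves, is the comparison the abstract describes.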

Fourth, we examined the relative magnitude and phase of motions of the left and the right visual axes during monocular and binocular viewing. We found that the two visual axes move vertically in perfect synchronization at all frequencies for any viewing condition. This is not true for horizontal motions: the amount of vergence noise is highest for stationary fixation and diminishes for tracking tasks as the bandwidth of the target motion increases. Furthermore, movements of the occluded eye are larger than those of the seeing eye in monocular viewing. This effect is more pronounced for horizontal motions, for stationary fixation, and for lower frequencies.

Finally, we have related our findings to previously known facts about the pertinent nerve pathways in order to postulate a model for the neurological binocular control of the visual axes.

Relevance:

80.00%

Publisher:

Abstract:

In the first section of this thesis, two-dimensional properties of the human eye movement control system were studied. The vertical - horizontal interaction was investigated by using a two-dimensional target motion consisting of a sinusoid in one of the directions vertical or horizontal, and low-pass filtered Gaussian random motion of variable bandwidth (and hence information content) in the orthogonal direction. It was found that the random motion reduced the efficiency of the sinusoidal tracking. However, the sinusoidal tracking was only slightly dependent on the bandwidth of the random motion. Thus the system should be thought of as consisting of two independent channels with a small amount of mutual cross-talk.

These target motions were then rotated to discover whether or not the system is capable of recognizing the two-component nature of the target motion. That is, the sinusoid was presented along an oblique line (neither vertical nor horizontal) with the random motion orthogonal to it. The system did not simply track the vertical and horizontal components of motion, but rotated its frame of reference so that its two tracking channels coincided with the directions of the two target motion components. This recognition occurred even when the two orthogonal motions were both random, but with different bandwidths.

In the second section, time delays, prediction and power spectra were examined. Time delays were calculated in response to various periodic signals, various bandwidths of narrow-band Gaussian random motions and sinusoids. It was demonstrated that prediction occurred only when the target motion was periodic, and only if the harmonic content was such that the signal was sufficiently narrow-band. It appears as if general periodic motions are split into predictive and non-predictive components.

For unpredictable motions, the relationship between the time delay and the average speed of the retinal image was linear. Based on this, I proposed a model explaining the time delays for both random and periodic motions. My experiments did not prove that the system is a sampled-data system, or that it is continuous. However, the model can be interpreted as representative of a sampled-data system whose sampling interval is a function of the target motion.

It was shown that increasing the bandwidth of the low-pass filtered Gaussian random motion resulted in an increase of the eye movement bandwidth. Some properties of the eyeball-muscle dynamics and the extraocular muscle "active state tension" were derived.

Relevance:

80.00%

Publisher:

Abstract:

We present a model for early vision tasks such as denoising, super-resolution, deblurring, and demosaicing. The model provides a resolution-independent representation of discrete images which admits a truly rotationally invariant prior. The model generalizes several existing approaches: variational methods, finite element methods, and discrete random fields. The primary contribution is a novel energy functional which has not previously been written down, which combines the discrete measurements from pixels with a continuous-domain world viewed through continuous-domain point-spread functions. The value of the functional is that simple priors (such as total variation and generalizations) on the continuous-domain world become realistic priors on the sampled images. We show that despite its apparent complexity, optimization of this model depends on just a few computational primitives, which although tedious to derive, can now be reused in many domains. We define a set of optimization algorithms which greatly overcome the apparent complexity of this model, and make possible its practical application. New experimental results include infinite-resolution upsampling, and a method for obtaining subpixel superpixels. © 2012 IEEE.
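
A fully discrete sketch of the simplest instance this model generalizes, a pixel data term plus a smoothed total-variation prior minimized by gradient descent, is given below; the continuous-domain world and point-spread functions of the paper are not modeled, and the step size and smoothing parameters are illustrative.

```python
import numpy as np

def tv_denoise(y, lam=0.2, eps=0.1, step=0.1, n_iter=300):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps)."""
    x = y.copy()
    for _ in range(n_iter):
        dx = np.diff(x, axis=1, append=x[:, -1:])    # forward differences
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)           # smoothed gradient norm
        px, py = dx / mag, dy / mag
        # divergence (adjoint of the forward-difference gradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)            # descend the energy
    return x

rng = np.random.default_rng(4)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0       # a step edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
out = tv_denoise(noisy)
print(np.mean((out - clean)**2) < np.mean((noisy - clean)**2))
```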

Relevance:

80.00%

Publisher:

Abstract:

With the growth of the Internet and electronic office work, large volumes of text have become available. Information extraction techniques help people quickly obtain useful information from large-scale text. Named entity recognition and entity relation extraction are two basic tasks of information extraction. Building on a survey of the main methods currently used for these two tasks, this thesis proposes a solution for each. The main contributions are: (1) The state of the art in named entity recognition and entity relation extraction is summarized from the perspectives of model selection and feature selection, with emphasis on statistical learning methods for named entity recognition and kernel-based methods for relation extraction. (2) To address the problem of determining entity segment boundaries, we study how to apply the Semi-Markov CRF model to Chinese named entity recognition. This model only requires the Markov property to hold between segments, while the text within a segment can be assigned rules flexibly. Applied to Chinese named entity recognition, it lets us design segment-level features that are more effective at identifying entity boundaries. Experiments show that adding segment-related features improves recognition performance by 4-5 percentage points. (3) The task of entity relation extraction is to determine the semantic relation between two entities. Previous work has shown that the syntactic parse tree connecting the two entities is very useful for determining their relation type, while the relatively mature flat-feature methods are limited in their ability to exploit parse tree structure. We therefore study kernel-based methods for Chinese entity relation extraction. Considering the characteristics of Chinese, we examine how different parse tree spans used in the convolution kernel affect extraction performance, construct several composite kernels based on the convolution kernel, and improve the shortest-path dependency kernel. When kernel methods were first applied to English relation extraction, the F1 score was only around 40%; using only a convolution kernel over parse trees, our Chinese relation extraction reaches an F1 of 35%, showing that kernel methods are also effective for Chinese relation extraction.

Relevance:

80.00%

Publisher:

Abstract:

Stochastic reservoir modeling is a technique used in reservoir description. Through this technique, multiple data sources with different scales can be integrated into the reservoir model, and its uncertainty can be conveyed to researchers and supervisors. With its digital models, changeable scales, honoring of known information and data, and conveyance of uncertainty, stochastic reservoir modeling provides a mathematical framework for researchers to integrate multiple data sources and information with different scales into their prediction models. As a newer method, stochastic reservoir modeling is on the upswing. Based on related work, this paper, starting with the Markov property in reservoirs, illustrates how to construct spatial models for categorical and continuous variables by use of Markov random fields. In order to explore reservoir properties, researchers must study the properties of the rocks embedded in reservoirs. Apart from laboratory methods, geophysical measurements and their subsequent interpretation are the main sources of information and data used in petroleum exploration and exploitation. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the different reservoir variables. Considering data sources, digital extent, and methods, reservoir modeling can be divided into four sorts: reservoir-sedimentology-based methods, reservoir seismic prediction, kriging, and stochastic reservoir modeling. The application of Markov chain models to the analysis of sedimentary strata is introduced in the third part of the paper.
This part presents the concept of the Markov chain model, the N-step transition probability matrix, the stationary distribution, the estimation of the transition probability matrix, the testing of the Markov property, two means of organizing sections (based on equal intervals and based on rock facies), the embedded Markov matrix, the semi-Markov chain model, the hidden Markov chain model, etc. Based on the 1-D Markov chain model, the conditional 1-D Markov chain model is discussed in the fourth part. By extending the 1-D Markov chain model to 2-D and 3-D situations, conditional 2-D and 3-D Markov chain models are presented. This part also discusses the estimation of the vertical and lateral transition probabilities and the initialization of the top boundary. Corresponding digital models are used to illustrate or test the related discussions. The fifth part, based on the fourth part and on the application of MRFs in image analysis, discusses an MRF-based method for simulating the spatial distribution of categorical reservoir variables. In this part, the probability of a particular configuration of a categorical variable, the definition of an energy function for a categorical variable field as a Markov random field, the Strauss model, and the estimation of the components of the energy function are presented. Corresponding digital models are used to illustrate or test the related discussions. For the simulation of the spatial distribution of continuous reservoir variables, the sixth part mainly explores two methods. The first is a pure GMRF-based method; related contents include the GMRF model and its neighborhood, parameter estimation, and MCMC iteration, with a digital example illustrating the method. The second is a two-stage-model method.
Based on the results of the categorical-variable distribution simulation, this method, taking a GMRF as the prior distribution for the continuous variables and exploiting the relationship between categorical variables such as rock facies and continuous variables such as porosity, permeability, and fluid saturation, can produce a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling. After discussing how to model the spatial distributions of categorical and continuous reservoir variables, the paper explores how to combine conceptual depositional models, well logs, cores, seismic attributes, and production history.
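
The basic step behind the 1-D Markov chain models described above, estimating a facies transition probability matrix by counting transitions in a logged vertical sequence, can be sketched as follows; the facies codes are illustrative.

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Row-normalized counts of one-step transitions in a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # rows with no observed transitions stay all-zero
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

facies = [0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1]   # e.g. sand / silt / shale codes
P = transition_matrix(facies, 3)
print(P)   # each occupied row sums to 1
```

Powers of P then give the N-step transition probabilities, and its left eigenvector for eigenvalue 1 gives the stationary facies proportions.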

Relevance:

80.00%

Publisher:

Abstract:

In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each one of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, which has no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop on non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
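
In the spirit of contributions (2) and (3), the sketch below shows a fast deterministic approximation, iterated conditional modes (ICM), for MAP-style restoration of a binary piecewise-constant image under an Ising-type Gibbs prior; the parameters beta and sigma are assumed values, not estimated from the data as in contribution (4).

```python
import numpy as np

def icm_restore(obs, beta=1.5, sigma=0.4, n_sweeps=5):
    """ICM: greedily minimize data term + Ising disagreement prior."""
    x = (obs > 0.5).astype(float)            # initialize by thresholding
    h, w = x.shape
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = [x[a, b] for a, b in
                        ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < h and 0 <= b < w]
                # local posterior energy for each candidate label
                e0 = obs[i, j]**2 / (2*sigma**2) + beta*sum(u != 0.0 for u in nbrs)
                e1 = (obs[i, j] - 1)**2 / (2*sigma**2) + beta*sum(u != 1.0 for u in nbrs)
                x[i, j] = 0.0 if e0 <= e1 else 1.0
    return x

rng = np.random.default_rng(5)
clean = np.zeros((12, 12)); clean[:, 6:] = 1.0
noisy = clean + 0.4 * rng.standard_normal(clean.shape)
restored = icm_restore(noisy)
errs_icm = int(np.sum(restored != clean))
errs_thresh = int(np.sum((noisy > 0.5).astype(float) != clean))
print(errs_icm, errs_thresh)   # ICM should not do worse than thresholding
```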

Relevance:

80.00%

Publisher:

Abstract:

The transfer of entanglement from optical fields to qubits provides a viable approach to entangling remote qubits in a quantum network. In cavity quantum electrodynamics, the scheme relies on the interaction between a photonic resource and two stationary intracavity atomic qubits. However, it might be hard in practice to trap two atoms simultaneously and synchronize their coupling to the cavities. To address this point, we propose and study entanglement transfer from cavities driven by an entangled external field to controlled flying qubits. We consider two exemplary non-Gaussian driving fields: NOON and entangled coherent states. We show that in the limit of long coherence time of the cavity fields, when the dynamics is approximately unitary, entanglement is transferred from the driving field to two atomic qubits that cross the cavities. On the other hand, a dissipation-dominated dynamics leads to very weakly quantum-correlated atomic systems, as witnessed by vanishing quantum discord.
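
One of the driving fields above can be written down directly: the sketch below builds a two-mode NOON state (|N,0⟩ + |0,N⟩)/√2 in a truncated Fock basis and computes the entanglement entropy of one mode, the one ebit the transfer scheme aims to hand to the atomic qubits. The photon number and cutoff are illustrative.

```python
import numpy as np

N, dim = 3, 4                           # photon number and Fock-space cutoff
psi = np.zeros((dim, dim))              # psi[n1, n2]: amplitude of |n1, n2>
psi[N, 0] = psi[0, N] = 1 / np.sqrt(2)  # NOON state (|N,0> + |0,N>)/sqrt(2)

rho_1 = psi @ psi.T                     # reduced density matrix of mode 1
evals = np.linalg.eigvalsh(rho_1)
evals = evals[evals > 1e-12]
entropy = -np.sum(evals * np.log2(evals))
print(entropy)                          # 1 ebit: two equal Schmidt weights
```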

Relevance:

80.00%

Publisher:

Abstract:

This study investigates the effects of ground heterogeneity, considering permeability as a random variable, on an intruding saltwater (SW) wedge using Monte Carlo simulations. Random permeability fields were generated, using the method of Local Average Subdivision (LAS), based on a lognormal probability density function. The LAS method allows the creation of spatially correlated random fields, generated using coefficients of variation (COV) and horizontal and vertical scales of fluctuation (SOF). The numerical modelling code SUTRA was employed to solve the coupled flow and transport problem. The well-defined 2D dispersive Henry problem was used as the test case for the method. The intruding SW wedge is defined by two key parameters, the toe penetration length (TL) and the width of the mixing zone (WMZ). These parameters were compared to the results of a homogeneous case simulated using effective permeability values. The simulation results revealed: (1) an increase in COV resulted in a seaward movement of TL; (2) the WMZ extended with increasing COV; (3) a general increase in horizontal and vertical SOF produced a seaward movement of TL, with the WMZ increasing slightly; (4) as the anisotropic ratio increased, the TL intruded further inland and the WMZ reduced in size. The results show that for large values of COV, effective permeability parameters are inadequate at reproducing the effects of heterogeneity on SW intrusion.
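
A spatially correlated lognormal permeability field of the kind used above can be sketched in one dimension; Cholesky factorization of an exponential covariance is used here as a simple stand-in for the Local Average Subdivision generator, and all parameter values are illustrative.

```python
import numpy as np

def lognormal_field(n, dx, mean_k, cov, corr_len, rng):
    """1-D lognormal field with exponential correlation of the log values."""
    # moments of the underlying normal from the lognormal mean and COV
    s2 = np.log(1.0 + cov**2)
    mu = np.log(mean_k) - 0.5 * s2
    x = np.arange(n) * dx
    C = s2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n))    # jitter for stability
    return np.exp(mu + L @ rng.standard_normal(n))

rng = np.random.default_rng(6)
k = lognormal_field(200, dx=0.5, mean_k=1e-11, cov=0.5, corr_len=5.0, rng=rng)
print(k.min() > 0)   # permeabilities are strictly positive by construction
```

Repeating the draw inside a Monte Carlo loop, and feeding each realization to the flow solver, is the experimental pattern the study describes.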

Relevance:

80.00%

Publisher:

Abstract:

Visual salience is an intriguing phenomenon observed in biological neural systems. Numerous attempts have been made to model visual salience mathematically using various feature contrasts, either locally or globally. However, these algorithmic models tend to ignore the problem’s biological solutions, in which visual salience appears to arise during the propagation of visual stimuli along the visual cortex. In this paper, inspired by the conjecture that salience arises from deep propagation along the visual cortex, we present a Deep Salience model in which a multi-layer model based on successive Markov random fields (sMRF) is proposed to analyze the input image through deep belief propagation. As a result, the foreground object can be automatically separated from the background in a fully unsupervised way. Experimental evaluation on the benchmark dataset validated that our Deep Salience model consistently outperforms eleven state-of-the-art salience models, yielding higher rates in the precision-recall tests and attaining the best F-measure and mean-square error in the experiments.