950 results for Gaussian random fields
Abstract:
A poorly understood phenomenon seen in complex systems is diffusion characterized by Hurst exponent H ≈ 1/2 but with non-Gaussian statistics. Motivated by such empirical findings, we report an exact analytical solution for a non-Markovian random walk model that gives rise to weakly anomalous diffusion with H = 1/2 but with a non-Gaussian propagator.
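As a concrete illustration of the exponent being discussed (and not of the paper's specific model, which the abstract does not spell out), here is a minimal sketch estimating H from the mean-squared-displacement scaling MSD(t) ~ t^(2H) of an ordinary random walk:

```python
# Minimal illustration (not the paper's model): estimate the Hurst exponent H
# from the scaling of the mean-squared displacement, MSD(t) ~ t^(2H).
import numpy as np

rng = np.random.default_rng(0)
n_walks, n_steps = 2000, 1000

# Ordinary random walk: i.i.d. +/-1 steps, so H = 1/2 with Gaussian statistics.
steps = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))
paths = np.cumsum(steps, axis=1)

t = np.arange(1, n_steps + 1)
msd = np.mean(paths**2, axis=0)               # MSD(t) averaged over walks

# Fit log MSD(t) = 2H log t + const on the late-time portion.
slope, _ = np.polyfit(np.log(t[100:]), np.log(msd[100:]), 1)
print(f"estimated H = {slope / 2:.3f}")       # should be close to 0.5
```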
Abstract:
Spatial linear models have been applied in numerous fields such as agriculture, geoscience and environmental sciences, among many others. Spatial dependence structure modelling, using a geostatistical approach, is an indispensable tool for estimating the parameters that define this structure. However, this estimation may be greatly affected by the presence of atypical observations in the sampled data. The purpose of this paper is to use diagnostic techniques to assess the sensitivity of the maximum-likelihood estimators, covariance functions and linear predictor to small perturbations in the data and/or the spatial linear model assumptions. The methodology is illustrated with two real data sets. The results allowed us to conclude that the presence of atypical values in the sample data has a strong influence on thematic maps, changing the spatial dependence structure.
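For readers unfamiliar with the estimation step being perturbed, the following hedged sketch fits an exponential spatial covariance by maximum likelihood on synthetic data; all parameter names and values are illustrative, and none of the paper's diagnostic techniques are reproduced:

```python
# Hedged sketch: maximum-likelihood estimation of an exponential spatial
# covariance C(h) = sigma2 * exp(-h / phi) + tau2 * 1{h == 0} for data at
# 2-D coordinates. Illustrative only, not the paper's diagnostic machinery.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(80, 2))
h = cdist(coords, coords)

# Simulate data from known parameters so the fit can be checked.
true_sigma2, true_phi, true_tau2 = 2.0, 3.0, 0.3
cov = true_sigma2 * np.exp(-h / true_phi) + true_tau2 * np.eye(len(coords))
y = rng.multivariate_normal(np.zeros(len(coords)), cov)

def neg_log_lik(theta):
    sigma2, phi, tau2 = np.exp(theta)      # log-parametrization keeps params > 0
    K = sigma2 * np.exp(-h / phi) + tau2 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, y)
    # 0.5 log|K| + 0.5 y' K^{-1} y, up to an additive constant
    return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

fit = minimize(neg_log_lik, x0=np.log([1.0, 1.0, 0.1]), method="Nelder-Mead")
print("ML estimates (sigma2, phi, tau2):", np.exp(fit.x))
```

Perturbing a single observation in `y` and refitting gives a crude feel for the sensitivity the paper studies systematically.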
Abstract:
This thesis deals with three different physical models, where each model involves a random component linked to a cubic lattice. First, a model is studied which is used in numerical calculations of Quantum Chromodynamics. In these calculations, random gauge fields are distributed on the bonds of the lattice. The formulation of the model is fitted into the mathematical framework of ergodic operator families. We prove that, for small coupling constants, the ergodicity of the underlying probability measure is indeed ensured and that the integrated density of states of the Wilson-Dirac operator exists. The physical situations treated in the next two chapters are more similar to one another. In both cases the principal idea is to study a fermion system in a cubic crystal with impurities that are modeled by a random potential located at the lattice sites. In the second model we apply the Hartree-Fock approximation to such a system. For the case of reduced Hartree-Fock theory at positive temperatures and a fixed chemical potential, we consider the limit of an infinite system. In that case we show the existence and uniqueness of minimizers of the Hartree-Fock functional. In the third model we formulate the fermion system algebraically via C*-algebras. The question posed here is how to calculate the heat production of the system under the influence of an external electromagnetic field. We show that the heat production corresponds exactly to what is empirically predicted by Joule's law in the regime of linear response.
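The integrated density of states mentioned above can be made concrete on a toy model. The sketch below approximates the IDS of a 1-D Anderson-type operator (a discrete Laplacian plus an i.i.d. random potential on the lattice sites, standing in for the thesis's more elaborate operators) by eigenvalue counting on a finite box:

```python
# Hedged sketch: integrated density of states (IDS) for a 1-D Anderson-type
# random operator (not the Wilson-Dirac operator of the thesis), approximated
# by eigenvalue counting on a finite box.
import numpy as np

rng = np.random.default_rng(11)
n = 500
# Discrete Laplacian plus i.i.d. random potential on the lattice sites.
H = (np.diag(rng.uniform(-1, 1, n)) + 2 * np.eye(n)
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
evals = np.linalg.eigvalsh(H)

def ids(E):
    return np.mean(evals <= E)   # fraction of states with energy below E

print([round(ids(E), 3) for E in (0.0, 1.0, 2.0, 3.0, 4.0)])
```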
Abstract:
Generalized linear mixed models (GLMMs) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, has encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher order Laplace expansions, stochastic integration and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well-known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high dimensional integrals.
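The Gauss-Hermite idea behind adaptive quadrature can be shown in a few lines. Below is a hedged sketch of the ordinary (non-adaptive) version for a random-intercept logistic model; the adaptive variant additionally recenters and rescales the nodes at each cluster's posterior mode:

```python
# Hedged sketch: Gauss-Hermite quadrature for the marginal likelihood of a
# random-intercept logistic model, y_ij ~ Bernoulli(logistic(beta0 + b_i)),
# with b_i ~ N(0, sigma^2).
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_marginal_lik(y_i, beta0, sigma, n_nodes=15):
    # Integral over b of prod_j p(y_ij | b) * N(b; 0, sigma^2) db.
    x, w = hermgauss(n_nodes)               # nodes/weights for weight e^{-x^2}
    b = np.sqrt(2.0) * sigma * x            # change of variables b = sqrt(2)*sigma*x
    p = 1.0 / (1.0 + np.exp(-(beta0 + b)))  # success prob at each node
    lik_given_b = np.prod(np.where(y_i[:, None] == 1, p, 1 - p), axis=0)
    return np.sum(w * lik_given_b) / np.sqrt(np.pi)

rng = np.random.default_rng(2)
b_true = rng.normal(0, 1.0)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + b_true))), size=8)
print(cluster_marginal_lik(y, beta0=0.5, sigma=1.0))
```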
Abstract:
A method for quantifying nociceptive withdrawal reflex receptive fields in human volunteers and patients is described. The reflex receptive field (RRF) for a specific muscle denotes the cutaneous area from which a muscle contraction can be evoked by a nociceptive stimulus. The method is based on random stimulations presented in a blinded sequence to 10 stimulation sites. The sensitivity map is derived by interpolating the reflex responses evoked from the 10 sites. A set of features describing the size and location of the RRF is presented, based on statistical analysis of the sensitivity map within every subject. The features include RRF area, volume, peak location and center of gravity. The method was applied to 30 healthy volunteers. Electrical stimuli were applied to the sole of the foot, evoking reflexes in the ankle flexor tibialis anterior. The RRF area covered a fraction of 0.57 ± 0.06 (S.E.M.) of the foot and was located on the medial, distal part of the sole of the foot. An intramuscular injection of capsaicin into flexor digitorum brevis was performed in one spinal cord-injured subject to attempt modulation of the reflex receptive field. The RRF area, RRF volume and location of the peak reflex response appear to be the most sensitive measures for detecting modulation of spinal nociceptive processing. This new method has important potential applications for exploring aspects of central plasticity in volunteers and patients. It may be utilized as a new diagnostic tool for central hypersensitivity and for quantification of therapeutic interventions.
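A hedged sketch of the interpolation-plus-features pipeline described above, with made-up site coordinates and responses (the cutoff defining the active area is an arbitrary illustrative choice):

```python
# Hedged sketch: interpolate reflex responses from a handful of stimulation
# sites onto a grid and extract RRF-style features (area, volume, peak,
# center of gravity). Coordinates and responses are invented for illustration.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
sites = rng.uniform(0, 1, size=(10, 2))       # 10 stimulation sites on the sole
responses = rng.uniform(0, 1, size=10)        # normalized reflex size per site

gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
zmap = griddata(sites, responses, (gx, gy), method="linear")  # NaN outside hull
cell = (gx[0, 1] - gx[0, 0]) * (gy[1, 0] - gy[0, 0])          # grid-cell area

active = np.nan_to_num(zmap) > 0.5 * np.nanmax(zmap)          # arbitrary cutoff
area = active.sum() * cell                                    # RRF area
volume = np.nansum(np.where(active, zmap, 0.0)) * cell        # RRF volume
peak = np.unravel_index(np.nanargmax(zmap), zmap.shape)       # peak grid index
weights = np.nan_to_num(zmap)
cog = (np.sum(gx * weights) / weights.sum(),                  # center of gravity
       np.sum(gy * weights) / weights.sum())
print(area, volume, (gx[peak], gy[peak]), cog)
```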
Abstract:
Focusing optical beams on a target through random propagation media is very important in many applications such as free-space optical communications and laser weapons. Random media effects such as beam spread and scintillation can degrade the optical system's performance severely. Compensation schemes are needed in these applications to overcome these random media effects. In this research, we investigated the optimal beams for two different optimization criteria: one is to maximize the concentrated received intensity and the other is to minimize the scintillation index at the target plane. In the study of the optimal beam to maximize the weighted integrated intensity, we derive a similarity relationship between the pupil-plane phase screen and extended Huygens-Fresnel models, and demonstrate the limited utility of maximizing the average integrated intensity. In the study of the optimal beam to minimize the scintillation index, we derive the first- and second-order moments for the integrated intensity of multiple coherent modes. Hermite-Gaussian and Laguerre-Gaussian modes are used as the coherent modes to synthesize an optimal partially coherent beam. The optimal beams demonstrate a clear reduction of the scintillation index and prove to be insensitive to the aperture-averaging effect.
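The scintillation index being minimized is simply the normalized intensity variance. A minimal sketch, with exponential samples standing in for a measured intensity record:

```python
# Hedged sketch: the scintillation index is the normalized intensity variance,
# sigma_I^2 = <I^2> / <I>^2 - 1; here it is estimated from simulated samples
# (exponential intensities mimic a fully developed speckle field).
import numpy as np

rng = np.random.default_rng(4)
I = rng.exponential(scale=1.0, size=100_000)   # stand-in intensity record

def scintillation_index(I):
    return np.mean(I**2) / np.mean(I)**2 - 1.0

print(scintillation_index(I))   # ~1.0 for fully developed speckle
```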
Abstract:
Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Now, estimating such a level set—and not solely its volume—and quantifying uncertainties on it are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
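A simple plug-in version of the level-set idea can be written with an off-the-shelf GP library. The sketch below thresholds the pointwise excursion probability at 1/2; the paper's random-set constructions refine this and attach uncertainty quantification. The test function, design and threshold are all invented for illustration:

```python
# Hedged sketch: plug-in estimate of the excursion set {x : f(x) > T} from a
# GP fit, via the pointwise probability P(f(x) > T).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in expensive function
X = np.linspace(0, 4, 12).reshape(-1, 1)       # small design, as if costly
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, f(X).ravel())

T = 1.0                                        # excursion threshold
grid = np.linspace(0, 4, 400).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
p_exceed = norm.sf((T - mu) / np.maximum(sd, 1e-12))   # P(f(x) > T)

est_set = grid[p_exceed > 0.5].ravel()         # plug-in level-set estimate
uncertain = grid[(p_exceed > 0.05) & (p_exceed < 0.95)].ravel()  # fuzzy margin
print(f"{len(est_set)} grid points in plug-in set, {len(uncertain)} uncertain")
```

Adaptive designs would add evaluations where the margin of uncertain points is widest, shrinking it faster than space-filling designs.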
Abstract:
Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob’ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob’ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
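To make the Vorob’ev expectation concrete, the following sketch computes it on a grid from Monte Carlo realizations of a random attained set. The randomly shifted fronts are a stand-in for the Gaussian process conditional simulations the paper actually uses:

```python
# Hedged sketch: Vorob'ev expectation of a random set from Monte Carlo
# realizations on a grid. Each realization is the region dominated by a
# randomly shifted 2-objective front (a stand-in for GP simulations).
import numpy as np

rng = np.random.default_rng(5)
g1, g2 = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))

n_sim = 200
covers = np.empty((n_sim,) + g1.shape, dtype=bool)
for s in range(n_sim):
    shift = 0.1 * rng.normal()
    covers[s] = g2 >= (1.0 - g1) ** 2 + shift    # attained (dominated) region

p = covers.mean(axis=0)                          # coverage probability map
mean_vol = covers.mean()                         # expected volume (grid fraction)

# Vorob'ev expectation: threshold p at the level whose excursion volume
# matches the expected volume of the random set.
levels = np.linspace(0, 1, 501)
vols = np.array([(p >= a).mean() for a in levels])
alpha = levels[np.argmin(np.abs(vols - mean_vol))]
vorobev = p >= alpha
print(f"threshold ~ {alpha:.2f}, volume {vorobev.mean():.3f} vs expected {mean_vol:.3f}")
```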
Abstract:
We present a framework for fitting multiple random walks to animal movement paths consisting of ordered sets of step lengths and turning angles. Each step and turn is assigned to one of a number of random walks, each characteristic of a different behavioral state. Behavioral state assignments may be inferred purely from movement data or may include the habitat type in which the animals are located. Switching between different behavioral states may be modeled explicitly using a state transition matrix estimated directly from data, or switching probabilities may take into account the proximity of animals to landscape features. Model fitting is undertaken within a Bayesian framework using the WinBUGS software. These methods allow for identification of different movement states using several properties of observed paths and lead naturally to the formulation of movement models. Analysis of relocation data from elk released in east-central Ontario, Canada, suggests a biphasic movement behavior: elk are either in an "encamped" state in which step lengths are small and turning angles are high, or in an "exploratory" state, in which daily step lengths are several kilometers and turning angles are small. Animals encamp in open habitat (agricultural fields and open forest), but the exploratory state is not associated with any particular habitat type.
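The biphasic movement model lends itself to a short simulation. A hedged sketch with invented parameter values (the paper estimates such quantities in a Bayesian framework via WinBUGS):

```python
# Hedged sketch: simulate a two-state ("encamped" vs "exploratory") switching
# random walk in step-length / turning-angle form. All parameters are invented.
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.9, 0.1],     # row s: transition probabilities out of state s
              [0.2, 0.8]])    # 0 = encamped, 1 = exploratory
step_scale = [0.05, 3.0]      # km; exploratory steps are kilometers long
turn_kappa = [0.5, 4.0]       # von Mises concentration; high kappa = small turns

state, heading, pos = 0, 0.0, np.zeros(2)
track = [pos.copy()]
for _ in range(500):
    state = rng.choice(2, p=P[state])                   # behavioral switching
    heading += rng.vonmises(0.0, turn_kappa[state])     # turning angle
    step = rng.exponential(step_scale[state])           # step length
    pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
    track.append(pos.copy())
track = np.array(track)
print(track.shape, track[-1])
```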
Abstract:
Mersenne Twister (MT) uniform random number generators are key cores for hardware acceleration of Monte Carlo simulations. In this work, two different architectures are studied: besides the classical table-based architecture, an architecture based on a circular buffer, specifically targeting FPGAs, is proposed. A 30% performance improvement has been obtained when compared to the fastest previous work. The applicability of the proposed MT architectures has been proven in a high-performance Gaussian RNG.
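As a software analogue of the MT-to-Gaussian pipeline (not the paper's FPGA design): CPython's random module is itself a Mersenne Twister (MT19937), so its uniform stream can feed a Box-Muller Gaussian generator:

```python
# Hedged software analogue: Box-Muller transform on a Mersenne Twister
# uniform stream (CPython's `random` uses MT19937 internally).
import math
import random

random.seed(42)

def box_muller():
    u1 = 1.0 - random.random()          # avoid log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

samples = [z for _ in range(50_000) for z in box_muller()]
mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")   # ~0 and ~1
```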
Abstract:
Reverberation chambers are well known for providing a random-like electric field distribution. Detecting the directivity or gain of a device in such an environment requires an adequate procedure and smart post-processing. In this paper, a new method is proposed for estimating the directivity of radiating devices in a reverberation chamber (RC). The method is based on the Rician K-factor, whose estimation in an RC benefits from recent improvements. Directivity estimation relies on the accurate determination of the K-factor with respect to a reference antenna. Good agreement is reported with measurements carried out in a near-field anechoic chamber (AC) using a near-field to far-field transformation.
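The moment-based K-factor estimate at the heart of such methods is short enough to sketch; synthetic Rician samples stand in for stirred-chamber measurements:

```python
# Hedged sketch: moment-based Rician K-factor estimate from complex field
# samples, K = |E[h]|^2 / E[|h - E[h]|^2]. Synthetic data replace RC sweeps.
import numpy as np

rng = np.random.default_rng(7)
K_true = 3.0
los = np.sqrt(K_true)                                   # direct (unstirred) part
diffuse = (rng.normal(size=5000) + 1j * rng.normal(size=5000)) / np.sqrt(2)
h = los + diffuse                                       # E[|diffuse|^2] = 1

mu = h.mean()
K_hat = np.abs(mu) ** 2 / np.mean(np.abs(h - mu) ** 2)
print(f"K estimate: {K_hat:.2f} (true {K_true})")
```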
Abstract:
The random switching of measurement bases is commonly assumed to be a necessary step of quantum key distribution protocols. In this paper we present a no-switching protocol and show that switching is not required for coherent-state continuous-variable quantum key distribution. Further, this protocol achieves higher information rates and a simpler experimental setup compared to previous protocols that rely on switching. We propose an optimal eavesdropping attack against this protocol, assuming individual Gaussian attacks. Finally, we investigate and compare the no-switching protocol applied to the original Bennett-Brassard 1984 scheme.
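A toy numerical model (no security analysis, and with a simplified shot-noise convention that is an assumption of this sketch) of why both quadratures remain useful without switching: Alice Gaussian-modulates x and p, Bob heterodynes, and each quadrature then behaves as an independent Gaussian channel:

```python
# Hedged toy model of a lossless no-switching link, one quadrature shown.
# Convention assumed here: vacuum variance 1; heterodyne splits the signal
# in half and adds one extra vacuum unit, giving total noise variance 1.
import numpy as np

rng = np.random.default_rng(9)
n, v_mod = 100_000, 4.0                 # symbols, modulation variance
x_a = rng.normal(0, np.sqrt(v_mod), n)  # Alice's x-quadrature data
x_b = np.sqrt(0.5) * x_a + rng.normal(0, 1.0, n)   # Bob's heterodyne outcome

snr = 0.5 * v_mod                       # signal 0.5*v_mod over unit noise
i_theory = 0.5 * np.log2(1 + snr)       # Shannon rate of one quadrature
rho = np.corrcoef(x_a, x_b)[0, 1]
i_empirical = -0.5 * np.log2(1 - rho**2)
print(f"empirical I ~ {i_empirical:.3f} bits, theory {i_theory:.3f}")
```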
Abstract:
In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian Processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed.
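Empirical learning curves of the kind being bounded can be generated directly. The sketch below compares a Squared Exponential kernel with a rougher Matern one (the Matern family corresponds to the Modified Bessel covariances of the abstract); the function, noise level and designs are invented:

```python
# Hedged sketch: empirical learning curves for 1-D GP regression with a
# Squared Exponential (RBF) kernel versus a rougher Matern kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

rng = np.random.default_rng(8)
f = lambda x: np.sin(5 * x)
X_test = np.linspace(0, 1, 200).reshape(-1, 1)

for name, kern in [("RBF", RBF(0.2)), ("Matern-3/2", Matern(0.2, nu=1.5))]:
    errs = []
    for n in [5, 10, 20, 40, 80]:
        mse = []
        for _ in range(20):                      # average over random designs
            X = rng.uniform(0, 1, (n, 1))
            y = f(X).ravel() + 0.1 * rng.normal(size=n)
            gp = GaussianProcessRegressor(kernel=kern, alpha=0.01).fit(X, y)
            mse.append(np.mean((gp.predict(X_test) - f(X_test).ravel()) ** 2))
        errs.append(np.mean(mse))
    print(name, np.round(errs, 4))               # error vs training-set size
```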
Abstract:
A Bayesian procedure for the retrieval of wind vectors over the ocean using satellite-borne scatterometers requires realistic prior near-surface wind field models over the oceans. We have implemented carefully chosen vector Gaussian Process models; however, in some cases these models are too smooth to reproduce real atmospheric features, such as fronts. At the scale of the scatterometer observations, fronts appear as discontinuities in wind direction. Due to the nature of the retrieval problem, a simple discontinuity model is not feasible, and hence we have developed a constrained discontinuity vector Gaussian Process model which ensures realistic fronts. We describe the generative model and show how to compute the data likelihood given the model. We show the results of inference using the model with Markov Chain Monte Carlo methods on both synthetic and real data.
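A one-dimensional caricature of the modelling problem: a smooth GP sample cannot represent a front, which appears as a step in wind direction. The sketch below simply adds such a step to a GP draw; the paper's constrained vector GP handles this within the generative model itself:

```python
# Hedged sketch: a smooth 1-D GP sample plus a step discontinuity standing in
# for a wind-direction front. Illustrative only, not the paper's model.
import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(0, 100, 300)                 # km along a scatterometer swath
K = 100.0 * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 20.0 ** 2))
smooth = rng.multivariate_normal(np.zeros_like(x), K + 1e-8 * np.eye(len(x)))

front_at = 60.0                              # front location (invented)
direction = smooth + np.where(x > front_at, 90.0, 0.0)   # 90-degree shift
print(direction[:3], direction[-3:])
```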
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure to the improvement in performance obtained using non-binary alphabets.
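The code ensemble described above is easy to instantiate. A hedged sketch constructing a sparse random parity-check matrix with exactly C non-zero entries per column (the integers 1..q-1 merely index the non-zero elements of GF(q); no field arithmetic is performed):

```python
# Hedged sketch: sparse random parity-check matrix over GF(q) with exactly
# C non-zero entries per column, as in the ensemble described above.
import numpy as np

def random_parity_check(n_rows, n_cols, C, q, seed=0):
    rng = np.random.default_rng(seed)
    H = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(n_cols):
        rows = rng.choice(n_rows, size=C, replace=False)   # C positions
        H[rows, j] = rng.integers(1, q, size=C)            # non-zero labels
    return H

H = random_parity_check(n_rows=6, n_cols=12, C=3, q=4)
print(H)
print("non-zeros per column:", (H != 0).sum(axis=0))       # all equal to C
```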