9 results for fractional random fields
in CentAUR: Central Archive University of Reading - UK
Abstract:
Urban surveillance footage can be of poor quality, partly due to low-quality cameras and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks very simple change detection is adequate, but sometimes a more detailed change-detection mask is desirable, e.g., for accurately tracking identity when faced with multiple interacting individuals and in pose-based behaviour recognition. We present a novel technique for enhancing a low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
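The paper's image combing estimator is not reproduced here, but the general idea of refining a noisy binary change mask under an MRF prior can be illustrated with iterated conditional modes (ICM) and an Ising smoothness term. This is a generic sketch, not the authors' method; the parameters `beta` and `lam` are illustrative.

```python
import numpy as np

def icm_smooth(mask, beta=1.5, lam=2.0, n_iters=5):
    """Smooth a noisy binary change mask with an Ising-prior MRF via ICM.

    mask : 2-D array of 0/1 change-detection labels (the noisy observation)
    beta : strength of the pairwise smoothness term
    lam  : strength of the data-fidelity term
    """
    labels = mask.astype(int).copy()
    H, W = labels.shape
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                # sum of 4-connected neighbour labels, mapped to -1/+1 spins
                nb = 0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        nb += 2 * labels[ii, jj] - 1
                # score of spin s: smoothness (agree with neighbours)
                # plus data fidelity (agree with the observed mask)
                obs = 2 * mask[i, j] - 1
                best = max((-1, 1), key=lambda s: beta * s * nb + lam * s * obs)
                labels[i, j] = (best + 1) // 2
    return labels
```

Isolated false-positive pixels get flipped off, while coherent changed regions survive, which is the qualitative behaviour a pairwise MRF prior buys over per-pixel thresholding.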
Abstract:
The LiHo_xY_{1-x}F_4 magnetic material in a transverse magnetic field B_x x̂ perpendicular to the Ising spin direction has long been used to study tunable quantum phase transitions in a random disordered system. We show that the B_x-induced magnetization along the x̂ direction, combined with the local random dilution-induced destruction of crystalline symmetries, generates, via the predominant dipolar interactions between Ho^{3+} ions, random fields along the Ising ẑ direction. This identifies LiHo_xY_{1-x}F_4 in B_x as a new random-field Ising system. The random fields explain the rapid decrease of the critical temperature in the diluted ferromagnetic regime and the smearing of the nonlinear susceptibility at the spin-glass transition with increasing B_x, and render the B_x-induced quantum criticality in LiHo_xY_{1-x}F_4 likely inaccessible.
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of this has focussed on the important practical case where the data consists of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior due to the use of MCMC to simulate from the latent graphical model, in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
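The key trick in the exchange algorithm is that the intractable normalising constant Z(β) cancels in the acceptance ratio once an auxiliary dataset is simulated at the proposed parameter. A minimal sketch for a single-parameter Ising model follows; as the abstract notes, Gibbs sweeps stand in for the exact sampler the algorithm formally requires, so this chain (like those in the paper) targets an approximation. All names and tuning values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def suff_stat(x):
    """Ising sufficient statistic: sum over neighbouring spin products."""
    return np.sum(x[:-1, :] * x[1:, :]) + np.sum(x[:, :-1] * x[:, 1:])

def gibbs_sample(beta, shape, sweeps=50):
    """Approximate draw from the Ising model at inverse temperature beta
    via Gibbs sweeps (a stand-in for exact simulation)."""
    x = rng.choice([-1, 1], size=shape)
    H, W = shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nb = 0
                if i > 0:     nb += x[i - 1, j]
                if i < H - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < W - 1: nb += x[i, j + 1]
                p = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                x[i, j] = 1 if rng.random() < p else -1
    return x

def exchange_step(beta, y, step=0.1):
    """One exchange-algorithm update for beta given data y (flat prior,
    symmetric random-walk proposal).  Z(beta) cancels in log_alpha."""
    beta_prop = beta + step * rng.normal()
    if beta_prop <= 0:
        return beta
    w = gibbs_sample(beta_prop, y.shape)  # auxiliary draw at the proposal
    log_alpha = (beta_prop - beta) * (suff_stat(y) - suff_stat(w))
    return beta_prop if np.log(rng.random()) < log_alpha else beta
```

The acceptance ratio reduces to exp((β' − β)(S(y) − S(w))) because the unnormalised likelihood is exp(β S(x)) for both the data and the auxiliary variable.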
Abstract:
Matheron's usual variogram estimator can result in unreliable variograms when data are strongly asymmetric or skewed. Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminate the primary process. This paper, the first of a pair, examines the effects of underlying asymmetry on the variogram and on the accuracy of prediction; the second paper examines the effects arising from outliers. Standard geostatistical texts suggest ways of dealing with underlying asymmetry; however, this advice is based on informed intuition rather than detailed investigation. To determine whether the methods generally used to deal with underlying asymmetry are appropriate, the effects of different coefficients of skewness on the shape of the experimental variogram and on the model parameters were investigated. Simulated annealing was used to create normally distributed random fields of different size from variograms with different nugget:sill ratios. These data were then modified to give different degrees of asymmetry and the experimental variogram was computed in each case. The effects of standard data transformations on the form of the variogram were also investigated. Cross-validation was used to assess quantitatively the performance of the different variogram models for kriging. The results showed that the shape of the variogram was affected by the degree of asymmetry, and that the effect increased as the size of data set decreased. Transformations of the data were more effective in reducing the skewness coefficient in the larger sets of data. Cross-validation confirmed that variogram models from transformed data were more suitable for kriging than were those from the raw asymmetric data. The results of this study have implications for the 'standard best practice' in dealing with asymmetry in data for geostatistical analyses. (C) 2007 Elsevier Ltd. All rights reserved.
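Matheron's method-of-moments estimator averages squared pairwise differences, which is exactly why it is sensitive to skewness and outliers: a single extreme value enters every lag squared. A minimal sketch for a regular 1-D transect (the 2-D case just generalises the pairing over lag vectors):

```python
import numpy as np

def matheron_variogram(z, max_lag):
    """Matheron's method-of-moments variogram estimator on a regular
    1-D transect:  gamma(h) = (1 / 2N(h)) * sum_i (z_i - z_{i+h})^2."""
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]          # all pairs separated by lag h
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma
```

Because the differences are squared, a long-tailed marginal distribution inflates the semivariances unevenly across lags, distorting the fitted nugget and sill — the effect the paper quantifies.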
Abstract:
Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminate the primary process. The first paper of this series examined the effects of the former on the variogram and this paper examines the effects of asymmetry arising from outliers. Simulated annealing was used to create normally distributed random fields of different size that are realizations of known processes described by variograms with different nugget:sill ratios. These primary data sets were then contaminated with randomly located and spatially aggregated outliers from a secondary process to produce different degrees of asymmetry. Experimental variograms were computed from these data by Matheron's estimator and by three robust estimators. The effects of standard data transformations on the coefficient of skewness and on the variogram were also investigated. Cross-validation was used to assess the performance of models fitted to experimental variograms computed from a range of data contaminated by outliers for kriging. The results showed that where skewness was caused by outliers the variograms retained their general shape, but showed an increase in the nugget and sill variances and nugget:sill ratios. This effect was only slightly more for the smallest data set than for the two larger data sets and there was little difference between the results for the latter. Overall, the effect of size of data set was small for all analyses. The nugget:sill ratio showed a consistent decrease after transformation to both square roots and logarithms; the decrease was generally larger for the latter, however. Aggregated outliers had different effects on the variogram shape from those that were randomly located, and this also depended on whether they were aggregated near to the edge or the centre of the field. 
The results of cross-validation showed that the robust estimators and the removal of outliers were the most effective ways of dealing with outliers for variogram estimation and kriging. (C) 2007 Elsevier Ltd. All rights reserved.
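One of the robust estimators commonly compared in this setting is that of Cressie and Hawkins, which averages square-rooted absolute differences before raising to the fourth power, damping the leverage of outlying pairs. The sketch below uses the usual 0.457 + 0.494/N bias-correction denominator; whether this matches the exact variants used in the paper is an assumption, and the 1-D regular-transect setting is purely illustrative.

```python
import numpy as np

def cressie_hawkins_variogram(z, max_lag):
    """Cressie-Hawkins robust variogram estimator on a regular 1-D transect.

    The fourth root transform pulls in extreme differences, so isolated
    outliers inflate the estimate far less than under Matheron's
    squared-difference estimator.
    """
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        d = np.abs(z[h:] - z[:-h])
        n = d.size
        gamma[h - 1] = np.mean(np.sqrt(d)) ** 4 / (2.0 * (0.457 + 0.494 / n))
    return gamma
```

Comparing this with Matheron's estimator on data contaminated by a secondary outlier process reproduces the qualitative finding above: the robust estimate keeps the variogram shape while the classical one inflates the nugget and sill.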
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems.
This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations in which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations in which it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
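The massive-dataset motivation can be made concrete with a toy approximate kernel: a Metropolis-Hastings sampler for a Gaussian mean in which each acceptance test evaluates the log-likelihood on a random subsample, rescaled to the full data size. This is a generic subsampling sketch of the "replace P by P̂" idea, not the paper's construction; all tuning values are illustrative, and the resulting chain targets only an approximation of the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_mh(data, n_steps=2000, subsample=100, step=0.2):
    """Random-walk Metropolis for a Gaussian mean (unit variance, flat
    prior), with the log-likelihood difference estimated from a random
    subsample -- i.e. simulating from an approximate kernel P-hat."""
    n = len(data)
    theta = 0.0
    chain = np.empty(n_steps)
    for t in range(n_steps):
        prop = theta + step * rng.normal()
        idx = rng.choice(n, size=subsample, replace=False)
        batch = data[idx]
        # subsampled log-likelihood difference, rescaled to full data size
        scale = n / subsample
        ll_diff = scale * (np.sum(-0.5 * (batch - prop) ** 2)
                           - np.sum(-0.5 * (batch - theta) ** 2))
        if np.log(rng.random()) < ll_diff:
            theta = prop
        chain[t] = theta
    return chain
```

The subsampling noise perturbs every accept/reject decision, so the chain's stationary distribution differs from the exact posterior; bounding that discrepancy in terms of the perturbation of the kernel is precisely the kind of question the stability theory in the paper addresses.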
Abstract:
In many practical situations where spatial rainfall estimates are needed, rainfall occurs as a spatially intermittent phenomenon. An efficient geostatistical method for rainfall estimation in the case of intermittency has previously been published and comprises the estimation of two independent components: a binary random function for modeling the intermittency and a continuous random function that models the rainfall inside the rainy areas. The final rainfall estimates are obtained as the product of the estimates of these two random functions. However, the published approach does not contain a method for estimation of uncertainties. The contribution of this paper is the presentation of the indicator maximum likelihood estimator from which the local conditional distribution of the rainfall value at any location may be derived using an ensemble approach. From the conditional distribution, representations of uncertainty such as the estimation variance and confidence intervals can be obtained. An approximation to the variance can be calculated more simply by assuming rainfall intensity is independent of location within the rainy area. The methodology has been validated using simulated and real rainfall data sets. The results of these case studies show good agreement between predicted uncertainties and measured errors obtained from the validation data.
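The two-component structure — an occurrence probability from a binary indicator, multiplied by a conditional intensity estimated from the rainy gauges only — can be sketched simply. Inverse-distance weighting stands in here for the kriging used in the published method, so this is a structural illustration only; all names and parameters are assumptions.

```python
import numpy as np

def rainfall_estimate(obs_xy, obs_rain, grid_xy, power=2.0):
    """Intermittent-rainfall estimate as the product of two components:
    an occurrence probability (interpolated rain/no-rain indicator) and a
    conditional intensity (interpolated from wet gauges only).
    Inverse-distance weights stand in for kriging weights."""
    obs_rain = np.asarray(obs_rain, float)
    wet = obs_rain > 0
    est = np.empty(len(grid_xy))
    for k, g in enumerate(grid_xy):
        d = np.linalg.norm(obs_xy - g, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        w /= w.sum()
        p_wet = np.sum(w * wet)                      # binary component
        if wet.any():
            ww = w[wet] / w[wet].sum()
            intensity = np.sum(ww * obs_rain[wet])   # conditional intensity
        else:
            intensity = 0.0
        est[k] = p_wet * intensity                   # product of the fields
    return est
```

Repeating the interpolation over an ensemble of conditional simulations of the two components, rather than a single estimate, is what yields the local conditional distribution from which the paper derives variances and confidence intervals.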
Abstract:
An incidence matrix analysis is used to model a three-dimensional network consisting of resistive and capacitive elements distributed across several interconnected layers. A systematic methodology for deriving a descriptor representation of the network with random allocation of the resistors and capacitors is proposed. Using a transformation of the descriptor representation into standard state-space form, amplitude and phase admittance responses of three-dimensional random RC networks are obtained. Such networks display emergent behavior with a characteristic Jonscher-like response over a wide range of frequencies. A model approximation study of these networks is performed to infer the admittance response using integral and fractional order models. It was found that a fractional order model with only seven parameters can accurately describe the responses of networks composed of more than 70 nodes and 200 branches with 100 resistors and 100 capacitors. The proposed analysis can be used to model charge migration in amorphous materials, which may be associated with specific macroscopic or microscopic scale fractal geometrical structures in composites displaying a viscoelastic electromechanical response, as well as to model the collective responses of processes governed by random events described using statistical mechanics.
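The signature of a Jonscher-like response is a single-term fractional-order admittance Y(jω) = A(jω)^α: a power-law magnitude and a constant phase of απ/2 across frequency. The sketch below evaluates such a model and recovers the order α from the log-log magnitude slope; it illustrates the kind of seven-parameter fractional fit described above in its simplest one-term form, with illustrative names and values.

```python
import numpy as np

def fractional_admittance(A, alpha, omega):
    """Single-term fractional-order admittance Y(jw) = A * (jw)^alpha.
    |Y| is a power law in frequency; arg(Y) is constant at alpha*pi/2,
    the hallmark of a Jonscher-like response."""
    return A * (1j * omega) ** alpha

def fit_order(omega, Y):
    """Recover the fractional order alpha as the slope of log|Y| vs log(w)."""
    slope, _ = np.polyfit(np.log(omega), np.log(np.abs(Y)), 1)
    return slope
```

Fitting this form to the magnitude and phase responses computed from the descriptor state-space model of a random RC network is how the small-parameter fractional approximation in the study would be assessed.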