919 results for Gaussian quadrature formulas.


Relevance: 10.00%

Abstract:

This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful foundation on which to build cooperative information gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Implementations of DDF using two probabilistic representations, Gaussian estimates and Gaussian mixtures, are compared on a common data set. The overall system design is detailed, providing insight into the complexity of implementing a robust DDF system for information gathering tasks in outdoor UAV applications.

Relevance: 10.00%

Abstract:

Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This mobility prediction model is trained using sample executions of motion primitives on representative terrain, and predicts the future outcome of control actions on similar terrain. Using Gaussian process regression allows us to exploit its inherent measure of prediction uncertainty in planning. We integrate mobility prediction into a Markov decision process framework and use dynamic programming to construct a control policy for navigation to a goal region in a terrain map built using an on-board depth sensor. We consider rigid terrain, consisting of uneven ground, small rocks, and non-traversable rocks, as well as deformable terrain. We introduce two methods for training the mobility prediction model from either proprioceptive or exteroceptive observations, and report results from nearly 300 experimental trials using a planetary rover platform in a Mars-analogue environment. Our results validate the approach and demonstrate the value of planning under uncertainty for safe and reliable navigation.
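
As an illustration of the two ingredients this abstract combines, the sketch below (in Python, with invented terrain features, rewards, and risk terms; it is not the authors' implementation) fits a Gaussian process mobility model and then runs value iteration on a toy one-dimensional MDP that penalises uncertain moves:

```python
# Minimal sketch, assuming synthetic training data: learn a mobility model
# with GP regression, then plan with value iteration on a tiny grid MDP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed training data: terrain features of executed motion primitives
# (e.g. mean slope, roughness) -> observed heading error of the outcome.
X_train = np.random.rand(200, 2)
y_train = 0.3 * X_train[:, 0] + 0.05 * np.random.randn(200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# Predictive mean and standard deviation give both the expected outcome of a
# control action and the uncertainty the planner can penalise.
mean, std = gp.predict(np.array([[0.4, 0.7]]), return_std=True)

# Toy 1-D MDP over grid cells: value iteration with an uncertainty penalty.
n, goal, gamma = 20, 19, 0.95
V = np.zeros(n)
risk = np.random.rand(n) * std[0]          # assumed per-cell risk from the GP
for _ in range(100):
    for s in range(n - 1):
        stay = -1.0 + gamma * V[s]
        move = -1.0 - risk[s + 1] + gamma * V[s + 1]
        V[s] = max(stay, move)
    V[goal] = 0.0
```

The design point carried over from the abstract is that the GP's predictive standard deviation, not just its mean, feeds the planner.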

Relevance: 10.00%

Abstract:

A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to unbiasedly estimate the likelihood, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimators are proposed: one based on quasi-Monte Carlo methods and one based on the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
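
As a hedged sketch of the second estimator mentioned (Laplace approximation with importance sampling), the following Python fragment estimates the marginal likelihood of a deliberately simple one-block mixed effects model, y_i = b + eps_i with b ~ N(0, tau^2); all model choices are illustrative:

```python
# Sketch under assumed model: estimate p(y) = ∫ p(y|b) p(b) db with draws
# from a Gaussian proposal centred at the posterior mode (Laplace step).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
tau, sigma = 1.0, 0.5                         # random-effect and residual sd
y = rng.normal(0.7, sigma, size=10)           # one block of observations

def neg_log_joint(b):
    return -(stats.norm.logpdf(y, b, sigma).sum()
             + stats.norm.logpdf(b, 0.0, tau))

# Laplace step: the mode and (here analytic) curvature of the joint density
# define a Gaussian importance-sampling proposal for the random effect b.
b_hat = optimize.minimize_scalar(neg_log_joint).x
hess = len(y) / sigma**2 + 1.0 / tau**2
prop = stats.norm(b_hat, 1.0 / np.sqrt(hess))

b = prop.rvs(size=5000, random_state=rng)
log_w = (stats.norm.logpdf(y[:, None], b, sigma).sum(axis=0)
         + stats.norm.logpdf(b, 0.0, tau) - prop.logpdf(b))
like_hat = np.exp(log_w).mean()               # unbiased estimate of p(y)
```

The plain average of unnormalised weights, rather than a self-normalised estimator, is what keeps the estimate unbiased, which is the property the exact-approximate framework relies on.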

Relevance: 10.00%

Abstract:

This paper develops maximum likelihood (ML) estimation schemes for finite-state semi-Markov chains in white Gaussian noise. We assume that the semi-Markov chain is characterised by transition probabilities of known parametric form with unknown parameters. We reformulate this hidden semi-Markov model (HSM) problem in the scalar case as a two-vector homogeneous hidden Markov model (HMM) problem in which the state consists of the signal augmented by the time since the last transition. With this reformulation we apply the expectation-maximisation (EM) algorithm to obtain ML estimates of the transition probability parameters, Markov state levels, and noise variance. To demonstrate our proposed schemes, motivated by neuro-biological applications, we use a damped sinusoidal parameterisation of the transition probabilities.
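
The reformulation can be made concrete. The sketch below builds the homogeneous transition matrix of the augmented chain whose state is (level, time since last transition), using an assumed two-level chain and a damped-sinusoid hazard in the spirit of the parameterisation described; the paper's exact form and the EM step are not reproduced:

```python
# Illustrative state augmentation: a two-level semi-Markov chain becomes a
# homogeneous HMM whose state is (level, dwell time d), d truncated at D.
import numpy as np

D = 30                                      # truncation of the dwell time
a, w, lam = 0.4, 0.8, 0.1                   # assumed hazard parameters

def hazard(d):
    """Probability of switching level after dwelling d steps."""
    return np.clip(0.5 + a * np.exp(-lam * d) * np.sin(w * d), 0.01, 0.99)

# Augmented state index: s = level * D + d, level in {0, 1}, d in {0..D-1}.
P = np.zeros((2 * D, 2 * D))
for level in (0, 1):
    for d in range(D):
        p_switch = hazard(d)
        s = level * D + d
        P[s, (1 - level) * D] = p_switch                      # switch: d resets
        P[s, level * D + min(d + 1, D - 1)] = 1 - p_switch    # stay: d grows
assert np.allclose(P.sum(axis=1), 1.0)
```

Because P is a fixed stochastic matrix, standard HMM machinery (forward-backward, EM) applies directly to the augmented chain.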

Relevance: 10.00%

Abstract:

In this paper conditional hidden Markov model (HMM) filters and conditional Kalman filters (KF) are coupled together to improve demodulation of differentially encoded signals in noisy fading channels. We present an indicator matrix representation for differentially encoded signals and the optimal HMM filter for demodulation. The filter requires O(N^3) calculations per time iteration, where N is the number of message symbols. Decision feedback equalisation is investigated by coupling the optimal HMM filter for estimating the message, conditioned on estimates of the channel parameters, with a KF for estimating the channel states, conditioned on soft information message estimates. The particular differential encoding scheme examined in this paper is differential phase shift keying. However, the techniques developed can be extended to other forms of differential modulation. The channel model we use allows for multiplicative channel distortions and additive white Gaussian noise. Simulation studies are also presented.
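
For orientation, a generic HMM filter recursion of the kind being coupled here looks as follows in Python; the DPSK indicator-matrix construction and the conditional Kalman filter are specific to the paper and are not reproduced:

```python
# Generic HMM forward-filter step: predict with the transition matrix,
# correct with the per-symbol observation likelihoods, then normalise.
import numpy as np

def hmm_filter_step(pi, A, likelihood):
    """One recursion of the HMM filter for the filtered state distribution."""
    pred = A.T @ pi
    post = pred * likelihood
    return post / post.sum()

N = 4                                     # number of message symbols
A = np.full((N, N), 1.0 / N)              # assumed symbol transition matrix
pi = np.full(N, 1.0 / N)                  # uniform initial distribution
obs_lik = np.array([0.1, 0.6, 0.2, 0.1])  # placeholder channel likelihoods
pi = hmm_filter_step(pi, A, obs_lik)
```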

Relevance: 10.00%

Abstract:

Abnormal event detection has attracted a lot of attention in the computer vision research community during recent years due to the increased focus on automated surveillance systems to improve security in public places. Due to the scarcity of training data and the definition of an abnormality being dependent on context, abnormal event detection is generally formulated as a data-driven approach where activities are modeled in an unsupervised fashion during the training phase. In this work, we use a Gaussian mixture model (GMM) to cluster the activities during the training phase, and propose a Gaussian mixture model based Markov random field (GMM-MRF) to estimate the likelihood scores of new videos in the testing phase. Furthermore, we propose two new features, optical acceleration and the histogram of optical flow gradients, to detect the presence of any abnormal objects and speed violations in the scene. We show that our proposed method outperforms other state-of-the-art abnormal event detection algorithms on the publicly available UCSD dataset.
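
The train/test split described maps directly onto standard tooling; a minimal sketch using scikit-learn (with placeholder features and threshold, and without the paper's MRF stage) is:

```python
# Sketch, assuming synthetic feature vectors: cluster normal activity with a
# GMM, then flag test samples with low likelihood under the fitted model.
import numpy as np
from sklearn.mixture import GaussianMixture

train_feats = np.random.rand(5000, 8)   # e.g. flow / acceleration features
gmm = GaussianMixture(n_components=10, covariance_type="full").fit(train_feats)

test_feats = np.random.rand(100, 8)
log_lik = gmm.score_samples(test_feats)   # per-sample log-likelihood
threshold = np.percentile(gmm.score_samples(train_feats), 1)
is_abnormal = log_lik < threshold
```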

Relevance: 10.00%

Abstract:

Interpolation techniques for spatial data have been applied frequently in various fields of geosciences. Although most conventional interpolation methods assume that it is sufficient to use first- and second-order statistics to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions and then utilized to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals of interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data, and this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.

Relevance: 10.00%

Abstract:

Active learning approaches reduce the annotation cost required by traditional supervised approaches to reach the same effectiveness by actively selecting informative instances during the learning phase. However, the effectiveness and robustness of the learnt models are influenced by a number of factors. In this paper we investigate the factors that affect the effectiveness, more specifically the stability and robustness, of active learning models built using conditional random fields (CRFs) for information extraction applications. Stability, defined as small variation in performance when small variations in the training data or in the parameters occur, is a major issue for machine learning models, but even more so in the active learning framework, which aims to minimise the amount of training data required. The factors we investigate are a) the choice of incremental vs. standard active learning, b) the feature set used as a representation of the text (i.e., morphological features, syntactic features, or semantic features), and c) the Gaussian prior variance as one of the important CRF parameters. Our empirical findings show that incremental learning and the Gaussian prior variance lead to more stable and robust models across iterations. Our study also demonstrates that orthographical, morphological and contextual features as a group of basic features play an important role in learning effective models across all iterations.

Relevance: 10.00%

Abstract:

Most existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high-dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real-world applications can have millions of pixels; therefore, scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. Using a pre-processing step, we fit a binding function to model the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
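
The precomputation idea generalises beyond the Potts model. The following hedged sketch fits a polynomial binding function from a parameter grid to the synthetic-likelihood Gaussian (mean, sd) using a stand-in simulator, after which evaluating the synthetic likelihood requires no further simulation:

```python
# Sketch with a stand-in simulator (not a Potts model): precompute summary
# statistics on a parameter grid, fit a smooth binding function, then reuse it.
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(theta, n=50):
    """Placeholder simulator: returns a summary statistic of pseudo-data."""
    return rng.normal(np.sin(theta), 0.2 + 0.1 * theta, size=n).mean()

# Offline precomputation: mean and sd of the summary across the grid.
grid = np.linspace(0.1, 2.0, 40)
sims = np.array([[simulate_summary(t) for _ in range(200)] for t in grid])
mu_fit = np.polyfit(grid, sims.mean(axis=1), deg=3)
sd_fit = np.polyfit(grid, sims.std(axis=1), deg=3)

def synthetic_loglik(theta, s_obs):
    """Gaussian synthetic log-likelihood via the fitted binding function."""
    mu, sd = np.polyval(mu_fit, theta), np.polyval(sd_fit, theta)
    return -0.5 * ((s_obs - mu) / sd) ** 2 - np.log(sd)

print(synthetic_loglik(1.0, s_obs=0.8))
```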

Relevance: 10.00%

Abstract:

Objectives: Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method of estimating, at a population level, the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex, using routine surveillance data. Methods: A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted as priors on the time-independent parameters; the shape parameters of these beta distributions were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour gradual changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, i.e., prevalence and incidence as reported by other studies. Results: Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ~120% over 12 years. Nationally, an estimated 356,000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponds to an annual chlamydia incidence estimate of 1.54% in 2013, up from 0.81% in 2001 (a ~90% increase). Conclusions: We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent of and trends in chlamydia incidence.
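
A minimal sketch of the prior construction described, assuming a single time-dependent parameter over the study years (the length scale and variance below are illustrative):

```python
# Sketch: multivariate Gaussian prior on logistic-transformed yearly
# parameters, with a Matérn covariance favouring smooth year-to-year change.
import numpy as np
from sklearn.gaussian_process.kernels import Matern

years = np.arange(2001, 2014).astype(float).reshape(-1, 1)
K = 0.5**2 * Matern(length_scale=3.0, nu=1.5)(years)   # 13x13 covariance
K += 1e-9 * np.eye(len(years))                          # numerical jitter

# A draw from the prior on the logistic scale, mapped back to (0, 1).
rng = np.random.default_rng(2)
z = rng.multivariate_normal(np.zeros(len(years)), K)
p = 1.0 / (1.0 + np.exp(-z))                            # yearly probabilities
```

The same construction extends to a second dimension (age cohort) by building the covariance over (year, age) pairs.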

Relevance: 10.00%

Abstract:

Non-rigid image registration is an essential tool for overcoming the inherent local anatomical variations that exist between images acquired from different individuals or atlases. Furthermore, certain applications require this type of registration to operate across images acquired from different imaging modalities. One popular local approach for estimating this registration is a block matching procedure utilising the mutual information criterion. However, previous block matching procedures generate a sparse deformation field containing displacement estimates at uniformly spaced locations, neglecting the evidence that block matching results depend on the amount of local information content. This paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo statistical procedure to optimally select grid points of interest. Three different methods are then compared for propagating the estimated sparse deformation field to the entire image: a thin-plate spline warp, Gaussian convolution, and a hybrid fluid technique. Results show that non-rigid registration can be improved by using the proposed algorithm to optimally select grid points of interest.
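
Of the three propagation schemes compared, Gaussian convolution is the simplest to sketch; the fragment below spreads a few synthetic sparse displacement estimates over the image grid with normalised Gaussian smoothing:

```python
# Sketch with synthetic sparse estimates: normalised Gaussian convolution
# turns displacements at scattered grid points into a dense field.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W, sigma = 128, 128, 8.0
dense = np.zeros((H, W))
weight = np.zeros((H, W))

# Assumed sparse displacement estimates (x-component) at selected grid points.
points = [(20, 30, 1.5), (64, 64, -0.7), (100, 90, 2.1)]
for r, c, d in points:
    dense[r, c] = d
    weight[r, c] = 1.0

num = gaussian_filter(dense, sigma)
den = gaussian_filter(weight, sigma)
field = np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)
```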

Relevance: 10.00%

Abstract:

Successful prediction of groundwater flow and solute transport through highly heterogeneous aquifers has remained elusive due to the limitations of methods to characterize hydraulic conductivity (K) and generate realistic stochastic fields from such data. As a result, many studies have suggested that the classical advective-dispersive equation (ADE) cannot reproduce such transport behavior. Here we demonstrate that when high-resolution K data are used with a fractal stochastic method that produces K fields with adequate connectivity, the classical ADE can accurately predict solute transport at the macrodispersion experiment site in Mississippi. This development provides great promise to accurately predict contaminant plume migration, design more effective remediation schemes, and reduce environmental risks. Key Points: non-Gaussian transport behavior at the MADE site is unraveled; the ADE can reproduce tracer transport in heterogeneous aquifers with no calibration; a new fractal method generates heterogeneous K fields with adequate connectivity.
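
One common way to generate a log-K field with fractal (power-law) spectral scaling is Fourier filtering of white noise, sketched below with an illustrative exponent; the paper's specific fractal method and its connectivity control are not reproduced:

```python
# Hedged sketch: 2-D correlated log-conductivity field via spectral synthesis,
# with an assumed power-law exponent beta and an arbitrary grid size.
import numpy as np

rng = np.random.default_rng(3)
n, beta = 256, 2.5                              # grid size, spectral exponent

kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = np.inf                                # suppress the zero mode

amplitude = k ** (-beta / 2.0)
noise = np.fft.fft2(rng.standard_normal((n, n)))
log_K = np.real(np.fft.ifft2(noise * amplitude))
log_K = (log_K - log_K.mean()) / log_K.std()    # standardised log-conductivity
K_field = 10.0 ** (log_K - 4.0)                 # e.g. K centred near 1e-4 m/s
```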

Relevance: 10.00%

Abstract:

A long-held assumption in entrepreneurship research is that normal (i.e., Gaussian) distributions characterize variables of interest for both theory and practice. We challenge this assumption by examining more than 12,000 nascent, young, and hyper-growth firms. Results reveal that variables which play central roles in resource-, cognition-, action-, and environment-based entrepreneurship theories exhibit highly skewed power law distributions, where a few outliers account for a disproportionate amount of the distribution's total output. Our results call for the development of new theory to explain and predict the mechanisms that generate these distributions and the outliers therein. We offer a research agenda, including a description of non-traditional methodological approaches, to answer this call.

Relevance: 10.00%

Abstract:

The commercialization of aerial image processing is highly dependent on platforms such as UAVs (Unmanned Aerial Vehicles). However, the lack of an automated UAV forced landing site detection system has been identified as one of the main impediments to allowing UAV flight over populated areas in civilian airspace. This article proposes a UAV forced landing site detection system that is based on machine learning approaches, including the Gaussian mixture model and the support vector machine (SVM). A range of learning parameters is analysed, including the number of Gaussian mixtures, the support vector kernel (linear, radial basis function (RBF), and polynomial), and the order of the RBF and polynomial kernels. Moreover, a modified footprint operator is employed during feature extraction to better describe the geometric characteristics of the local area surrounding a pixel. The performance of the presented system is compared to a baseline UAV forced landing site detection system which uses edge features and an Artificial Neural Network (ANN) region type classifier. Experiments conducted on aerial image datasets captured over typical urban environments reveal that improved landing site detection can be achieved with an SVM classifier with an RBF kernel using a combination of colour and texture features. Compared to the baseline system, the proposed system provides a significant improvement in the chance of detecting a safe landing area, and its performance is more stable than the baseline's in the presence of changes to the UAV altitude.
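
The winning classifier configuration is easy to reproduce in outline; a minimal scikit-learn sketch with placeholder colour-and-texture feature vectors is:

```python
# Sketch, assuming synthetic region features and labels: SVM with an RBF
# kernel, with C and gamma selected by cross-validated grid search.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

X = np.random.rand(1000, 12)       # e.g. colour histogram + texture features
y = np.random.randint(0, 2, 1000)  # 1 = safe landing region, 0 = unsafe

search = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]},
    cv=3,
)
search.fit(X, y)
safe = search.predict(np.random.rand(5, 12))
```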

Relevance: 10.00%

Abstract:

Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (like GGH [9], or NTRUSign [12]). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks [24]. NTRUSign was an extremely efficient scheme, and thus there has been noticeable interest in developing countermeasures to the attacks, but with little success [6]. In [8] Gentry, Peikert and Vaikuntanathan proposed a randomized version of Babai's nearest plane algorithm such that the distribution of a reduced vector modulo a secret parallelepiped depends only on the size of the basis used. Using this algorithm and generating large, close to uniform, public keys, they managed to obtain provably secure GGH-like lattice-based signatures. Recently, Stehlé and Steinfeld obtained a provably secure scheme very close to NTRUSign [26] (from a theoretical point of view). In this paper we present an alternative approach to sealing the leak of NTRUSign. Instead of modifying the lattices and algorithms used, we produce a classic leaky NTRUSign signature and hide it with Gaussian noise using techniques present in Lyubashevsky's signatures. Our main contributions are thus a set of strong NTRUSign parameters, obtained by taking into account the latest known attacks against the scheme, and a statistical way to hide the leaky NTRU signature so that this particular instantiation of a CVP-based signature scheme becomes zero-knowledge and secure against forgeries, based on the worst-case hardness of the Õ(N^1.5)-Shortest Independent Vector Problem over NTRU lattices. Finally, we give a set of concrete parameters to gauge the efficiency of the obtained signature scheme.
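
A toy one-dimensional illustration of the hiding technique referred to (Lyubashevsky-style rejection sampling): Gaussian noise is added to a secret-dependent value and the result is accepted or rejected so that the output distribution is a fixed Gaussian, independent of the secret. Real lattice signatures do this over high-dimensional vectors; the constants below are assumptions and the toy is only approximately exact:

```python
# Toy sketch, not a real signature: output z = v + noise distributed as
# N(0, sigma^2) regardless of v (approximately, for |v| much less than sigma).
import numpy as np

rng = np.random.default_rng(4)
sigma, M = 20.0, 3.0          # noise width and rejection constant (assumed)

def hide(v):
    while True:
        z = v + rng.normal(0.0, sigma)
        # Accept with probability target(z) / (M * proposal(z | v)), so the
        # accepted z carries no information about v.
        ratio = np.exp((-z**2 + (z - v) ** 2) / (2 * sigma**2)) / M
        if rng.random() < min(1.0, ratio):
            return z

samples = [hide(5.0) for _ in range(1000)]   # empirically close to N(0, sigma^2)
```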