14 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
in Cambridge University Engineering Department Publications Database
Abstract:
There is growing evidence that focal thinning of cortical bone in the proximal femur may predispose a hip to fracture. Detecting such defects in clinical CT is challenging, since cortices may be significantly thinner than the imaging system's point spread function. We recently proposed a model-fitting technique to measure sub-millimetre cortices, an ill-posed problem which was regularized by assuming a specific, fixed value for the cortical density. In this paper, we develop the work further by proposing and evaluating a more rigorous method for estimating the constant cortical density, and extend the paradigm to encompass the mapping of cortical mass (mineral mg/cm²) in addition to thickness. Density, thickness and mass estimates are evaluated on sixteen cadaveric femurs, with high-resolution measurements from a micro-CT scanner providing the gold standard. The results demonstrate robust, accurate measurement of peak cortical density and cortical mass. Cortical thickness errors are confined to regions of thin cortex and are bounded by the extent to which the local density deviates from the peak, averaging 20% for a 0.5 mm cortex.
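The fitting idea described above can be illustrated with a minimal sketch: the cortex is modelled as a rectangular density profile blurred by a Gaussian PSF, the cortical density is held fixed at an assumed peak value, and only the position and thickness are fitted; areal mass then follows as density times thickness. The PSF width, densities and sampling below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

PEAK_DENSITY = 1200.0   # mg/cm^3, assumed fixed cortical density
PSF_SIGMA = 0.6         # mm, assumed Gaussian blur of the CT system

def blurred_cortex(x, centre, thickness, soft_tissue=50.0, marrow=150.0):
    """Rectangular cortex of fixed peak density convolved with a Gaussian PSF."""
    a, b = centre - thickness / 2.0, centre + thickness / 2.0
    def step(edge):
        return 0.5 * (1.0 + erf((x - edge) / (np.sqrt(2.0) * PSF_SIGMA)))
    return (soft_tissue
            + (PEAK_DENSITY - soft_tissue) * step(a)
            + (marrow - PEAK_DENSITY) * step(b))

# Synthetic line profile through a 0.5 mm cortex, sampled every 0.3 mm
x = np.arange(-5.0, 5.0, 0.3)
observed = blurred_cortex(x, 0.0, 0.5) + np.random.default_rng(0).normal(0.0, 20.0, x.size)

# Fit only position and thickness; the fixed density acts as the regularizer
(centre, thickness), _ = curve_fit(blurred_cortex, x, observed, p0=[0.2, 1.0])
mass = PEAK_DENSITY * thickness * 0.1   # mg/cm^2 (0.1 converts mm to cm)
print(f"thickness = {thickness:.2f} mm, areal mass = {mass:.1f} mg/cm^2")
```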
Abstract:
Demodulation is an ill-posed problem whenever both carrier and envelope signals are broadband and unknown. Here, we approach this problem using the methods of probabilistic inference. The new approach, called Probabilistic Amplitude Demodulation (PAD), is computationally challenging but improves on existing methods in a number of ways. In contrast to previous approaches to demodulation, it satisfies five key desiderata: PAD has soft constraints because it is probabilistic; PAD is able to automatically adjust to the signal because it learns parameters; PAD is user-steerable because the solution can be shaped by user-specific prior information; PAD is robust to broadband noise because this is modeled explicitly; and PAD's solution is self-consistent, empirically satisfying a Carrier Identity property. Furthermore, the probabilistic view naturally encompasses noise and uncertainty, allowing PAD to cope with missing data and return error bars on carrier and envelope estimates. Finally, we show that when PAD is applied to a bandpass-filtered signal, the stop-band energy of the inferred carrier is minimal, making PAD well-suited to sub-band demodulation. © 2006 IEEE.
Abstract:
Amplitude demodulation is an ill-posed problem and so it is natural to treat it from a Bayesian viewpoint, inferring the most likely carrier and envelope under probabilistic constraints. One such treatment is Probabilistic Amplitude Demodulation (PAD), which, whilst computationally more intensive than traditional approaches, offers several advantages. Here we provide methods for estimating the uncertainty in the PAD-derived envelopes and carriers, and for learning free parameters such as the time-scale of the envelope. We show how the probabilistic approach can naturally handle noisy and missing data. Finally, we indicate how to extend the model to signals which contain multiple modulators and carriers.
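A much-simplified sketch of this probabilistic view (not the authors' PAD implementation): the envelope is exp(a_t) with a Gaussian random-walk smoothness prior on a_t, the carrier has a unit-variance Gaussian prior, and the MAP envelope is found by penalised optimisation. The smoothness weight and the synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.arange(2000) / 8000.0                       # 0.25 s at 8 kHz
true_env = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)   # slow positive modulator
y = true_env * rng.standard_normal(t.size)         # broadband carrier times envelope

smoothness = 50.0   # assumed weight of the envelope smoothness prior

def objective(a):
    """Negative log posterior for y_t = exp(a_t) * c_t with c_t ~ N(0, 1)."""
    env2 = np.exp(2.0 * a)
    d = np.diff(a)
    f = 0.5 * np.sum(y ** 2 / env2) + np.sum(a) + smoothness * np.sum(d ** 2)
    g = 1.0 - y ** 2 / env2                        # gradient of the data term
    g[:-1] -= 2.0 * smoothness * d                 # gradient of the smoothness prior
    g[1:] += 2.0 * smoothness * d
    return f, g

a0 = np.full(t.size, np.log(np.std(y)))
a_map = minimize(objective, a0, jac=True, method="L-BFGS-B").x
envelope, carrier = np.exp(a_map), y / np.exp(a_map)
```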
Abstract:
In this contribution we discuss a stochastic framework for air traffic conflict resolution. The conflict resolution task is posed as the problem of optimizing an expected value criterion. Optimization is carried out by Markov chain Monte Carlo (MCMC) simulation. A numerical example illustrates the proposed strategy. Copyright © 2005 IFAC.
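The general recipe can be sketched with a toy cost model in place of the paper's air-traffic scenario: the expected conflict cost of a candidate manoeuvre is estimated by Monte Carlo, and the manoeuvre parameter is optimised by a Metropolis (MCMC) random walk targeting low expected cost. The cost model and all numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(heading_change, n_samples=200):
    """Monte Carlo estimate of conflict probability plus a deviation penalty."""
    noise = rng.normal(0.0, 1.0, n_samples)                    # trajectory uncertainty (nm)
    miss_distance = 3.0 + 0.5 * abs(heading_change) + noise    # crude closest-approach model
    p_conflict = np.mean(miss_distance < 5.0)                  # P(separation below 5 nm)
    return p_conflict + 0.005 * heading_change ** 2            # conflict risk + fuel/deviation

# Metropolis random walk over the heading change, targeting exp(-cost / T)
theta, T = 0.0, 0.05
best_cost, best_theta = np.inf, theta
for _ in range(2000):
    proposal = theta + rng.normal(0.0, 2.0)
    delta = expected_cost(theta) - expected_cost(proposal)
    if delta > 0 or rng.random() < np.exp(delta / T):
        theta = proposal
    cost = expected_cost(theta, n_samples=1000)
    if cost < best_cost:
        best_cost, best_theta = cost, theta
print(f"best heading change = {best_theta:.1f} deg, expected cost = {best_cost:.3f}")
```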
Abstract:
A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings.
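A forward-simulation sketch of the kind of generative model described above (not the authors' code, and with inference by expectation propagation omitted): phase increments are drawn from a von Mises distribution centred on a slowly drifting instantaneous frequency, and the positive envelope is a Gaussian AR(1) latent passed through a softplus nonlinearity. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 4000, 8000                          # samples, sample rate (Hz)
mean_omega = 2 * np.pi * 440 / fs           # nominal carrier frequency (rad/sample)

# Slowly drifting instantaneous frequency: AR(1) around the nominal value
omega = np.empty(n)
omega[0] = mean_omega
for t in range(1, n):
    omega[t] = mean_omega + 0.995 * (omega[t - 1] - mean_omega) + 1e-4 * rng.standard_normal()

# Phase increments drawn from a von Mises centred on the current frequency
phase = np.cumsum(rng.vonmises(omega, kappa=5000.0))

# Positive envelope: Gaussian AR(1) latent pushed through a softplus
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.999 * x[t - 1] + 0.05 * rng.standard_normal()
envelope = np.log1p(np.exp(x))

signal = envelope * np.cos(phase)           # the observed (noise-free) waveform
```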
Abstract:
There are many methods for decomposing signals into a sum of amplitude and frequency modulated sinusoids. In this paper we take a new estimation-based approach. Identifying the problem as ill-posed, we show how to regularize the solution by imposing soft constraints on the amplitude and phase variables of the sinusoids. Estimation proceeds using a version of Kalman smoothing. We evaluate the method on synthetic and natural, clean and noisy signals, showing that it outperforms previous decompositions, but at a higher computational cost. © 2012 IEEE.
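One generic way to set up such an estimation, sketched under assumed settings rather than the paper's exact model: each sinusoid is tracked with an in-phase/quadrature state that rotates at its nominal frequency, small process noise acts as the soft constraint on amplitude and phase drift, and a standard Kalman filter (a smoother would follow the same pattern) infers the states from the noisy signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotation(omega):
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[c, -s], [s, c]])

fs = 8000
freqs = [440.0, 660.0]                              # assumed nominal frequencies (Hz)
omegas = [2 * np.pi * f / fs for f in freqs]

# State: stacked [in-phase, quadrature] pairs, one pair per sinusoid
A = np.zeros((2 * len(freqs), 2 * len(freqs)))
for k, w in enumerate(omegas):
    A[2*k:2*k+2, 2*k:2*k+2] = 0.999 * rotation(w)   # mild decay acts as the soft constraint
H = np.tile([1.0, 0.0], len(freqs))[None, :]        # observe the sum of in-phase components
Q = 1e-4 * np.eye(A.shape[0])                       # slow amplitude/phase drift
R = np.array([[1e-2]])                              # observation noise variance

def kalman_filter(y):
    m, P = np.zeros(A.shape[0]), np.eye(A.shape[0])
    states = np.empty((len(y), A.shape[0]))
    for i, yt in enumerate(y):
        m, P = A @ m, A @ P @ A.T + Q               # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        m = m + (K @ (yt - H @ m)).ravel()          # update
        P = P - K @ H @ P
        states[i] = m
    return states

t = np.arange(2000) / fs
y = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 440 * t)
y += 0.05 * rng.standard_normal(t.size)
states = kalman_filter(y)
amplitude_440 = np.hypot(states[:, 0], states[:, 1])   # slowly varying amplitude estimate
```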
Abstract:
The authors demonstrate that a widely proposed method of robot dynamic control can be inherently unstable, due to an algebraic feedback loop condition causing an ill-posed feedback system. By focussing on the concept of ill-posedness, a necessary and sufficient condition is derived for instability in robot manipulator systems which incorporate online acceleration cross-coupling control. Also demonstrated is a quasilinear multivariable control framework useful for assessing the robustness of this type of control when the instability condition is not obeyed.
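A toy illustration of why an algebraic feedback loop can make a control law ill-posed (a generic example, not the paper's manipulator model): with direct feedthrough gain F, the loop equation u = r + F u has a well-defined solution only while (I - F) is invertible, and iterating the loop, as an online implementation effectively does, diverges once the spectral radius of F reaches 1.

```python
import numpy as np

def solve_loop(F, r, iters=50):
    """Fixed-point iteration u <- r + F u, the way an online controller would evaluate the loop."""
    u = np.zeros_like(r)
    for _ in range(iters):
        u = r + F @ u
    return u

r = np.array([1.0, -0.5])
F_stable = np.array([[0.3, 0.1],
                     [0.0, 0.2]])    # spectral radius < 1: iteration converges to (I - F)^-1 r
F_unstable = np.array([[1.1, 0.0],
                       [0.0, 0.4]])  # spectral radius > 1: iteration grows without bound

print(solve_loop(F_stable, r))       # approximately np.linalg.solve(np.eye(2) - F_stable, r)
print(solve_loop(F_unstable, r))     # diverges: the algebraic loop is effectively ill-posed
```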
Abstract:
An implementation of the inverse vector Jiles-Atherton model for the solution of non-linear hysteretic finite element problems is presented. The implementation applies the fixed point method with differential reluctivity values obtained from the Jiles-Atherton model. Differential reluctivities are usually computed using numerical differentiation, which is ill-posed and amplifies small perturbations, causing sudden large increases or decreases in the differential reluctivity values that may lead to numerical problems. A rule-based algorithm for conditioning differential reluctivity values is presented. Unwanted perturbations in the computed differential reluctivity values are eliminated or reduced with the aim of guaranteeing convergence. Details of the algorithm are presented together with an evaluation of the algorithm by a numerical example. The algorithm is shown to guarantee convergence, although the rate of convergence depends on the choice of algorithm parameters. © 2011 IEEE.
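The flavour of such rule-based conditioning can be sketched as follows; the two rules and their bounds are assumptions made for illustration, not the paper's exact algorithm.

```python
import numpy as np

MU0 = 4e-7 * np.pi
NU_MIN, NU_MAX = 1.0 / (5000.0 * MU0), 1.0 / MU0   # assumed bounds: relative permeability in [1, 5000]
MAX_RATIO = 2.0                                    # assumed rule: change by at most a factor of 2 per iteration

def condition(nu_raw, nu_prev):
    """Rule-based conditioning of a raw (finite-difference) differential reluctivity."""
    nu = np.clip(nu_raw, NU_MIN, NU_MAX)                          # rule 1: keep within physical bounds
    nu = np.clip(nu, nu_prev / MAX_RATIO, nu_prev * MAX_RATIO)    # rule 2: damp sudden jumps
    return nu

# Example: noisy dH/dB estimates, including a negative spike from numerical differentiation
nu_prev = 1.0 / (1000.0 * MU0)
for nu_raw in [1.0e3, -2.0e5, 5.0e6, 9.0e2]:
    nu_prev = condition(nu_raw, nu_prev)
    print(f"conditioned reluctivity = {nu_prev:.3e} m/H")
```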
Abstract:
Current research into the process of engineering design is extending the use of computers towards the acquisition, representation and application of design process knowledge, in addition to the existing storage and manipulation of product-based models of design objects. This is a difficult task because the design of mechanical systems is a complex, often unpredictable process involving ill-structured problem solving skills and large amounts of knowledge, some of which may be of an incomplete and subjective nature. Design problems require the integration of a variety of modes of working, such as numerical, graphical, algorithmic or heuristic, and demand products through synthesis, analysis and evaluation activities.
This report presents the results of a feasibility study into the blackboard approach and discusses the development of an initial prototype system that will enable an alphanumeric design dialogue between a designer and an expert to be analysed in a formal way, thus providing real-life protocol data on which to base the blackboard message structures.
Restoration of images and 3D data to higher resolution by deconvolution with sparsity regularization
Abstract:
Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, sometimes images or 3D data are only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. This promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset; and show that good results can be obtained. © 2010 IEEE.
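A much-simplified sketch of the reweighted-L2 idea (the wavelet transform and the tree-structured weighting of the paper are omitted): the observation is modelled as subsampled convolution y = S C x, and sparsity is promoted by iteratively minimising ||y - S C x||^2 + lam * sum_i w_i * x_i^2 with weights w_i = 1 / (x_i^2 + eps), which approximates an L0 penalty. Sizes, the blur kernel and lam are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, factor = 128, 2                                  # signal length, subsampling factor
x_true = np.zeros(n)
x_true[[20, 60, 61, 100]] = [1.0, -0.7, 0.9, 0.5]   # sparse ground truth

kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
kernel /= kernel.sum()
C = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T
S = np.eye(n)[::factor]                             # subsampling operator
A = S @ C                                           # combined blur-then-subsample model
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

lam, eps = 1e-3, 1e-4
x = np.zeros(n)
w = np.ones(n)                                      # first pass: plain ridge-regularised solve
for _ in range(30):                                 # reweighted-L2 (IRLS) iterations
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    w = 1.0 / (x ** 2 + eps)                        # reweighting that approximates the L0 norm
print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```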
Abstract:
The scattering of sound from a point source by a Rankine vortex is investigated numerically by solving the Euler equations with the novel high-resolution CABARET method. For several Mach numbers of the vortex, the time-averaged amplitudes of the scattered field obtained from the numerical modeling are compared with the predictions of theoretical scaling laws. Copyright © 2009 by Sergey Karabasov.
Abstract:
Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is, however, a discord: current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.