253 results for Kingsbury
Abstract:
A dynamic programming algorithm for joint data detection and carrier phase estimation of continuous-phase-modulated signals is presented. The intent is to combine the robustness of noncoherent detectors with the superior performance of coherent ones. The algorithm differs from the Viterbi algorithm only in the metric that it maximizes over the possible transmitted data sequences. This metric is influenced both by the correlation with the received signal and by the current estimate of the carrier phase. Carrier-phase estimation is based on decision guidance, but there is no external phase-locked loop. Instead, the phase of the best complex correlation with the received signal over the last few signaling intervals is used. The algorithm is slightly more complex than the coherent Viterbi algorithm but does not require narrowband filtering of the recovered carrier, as earlier approaches did, to achieve the same level of performance.
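As an illustration only (not the authors' implementation), a minimal Python sketch of a Viterbi-style recursion whose metric folds in a phase reference taken from each survivor's recent complex branch correlations; the trellis structure and branch-correlation inputs are placeholders.

import numpy as np

def phase_aug_viterbi(branch_corr, n_states, preds, L=4):
    # branch_corr[k][p][s]: complex correlation of the received signal with
    # the candidate CPM waveform for the transition p -> s in interval k.
    # preds[s]: list of predecessor states of s.  L: phase-memory length.
    K = len(branch_corr)
    metric = np.zeros(n_states)
    hist = [[] for _ in range(n_states)]      # recent correlations per survivor
    back = np.zeros((K, n_states), dtype=int)
    for k in range(K):
        new_metric = np.full(n_states, -np.inf)
        new_hist = [[] for _ in range(n_states)]
        for s in range(n_states):
            for p in preds[s]:
                z = branch_corr[k][p][s]
                ref = sum(hist[p][-(L - 1):], 0j) + z   # phase reference
                m = metric[p] + np.real(z * np.exp(-1j * np.angle(ref)))
                if m > new_metric[s]:
                    new_metric[s], back[k, s] = m, p
                    new_hist[s] = hist[p][-(L - 1):] + [z]
        metric, hist = new_metric, new_hist
    path = [int(np.argmax(metric))]           # trace back the best sequence
    for k in range(K - 1, 0, -1):
        path.append(int(back[k, path[-1]]))
    return path[::-1]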
Abstract:
This paper describes a speech coding technique developed to provide a method of digitising speech at bit rates in the range 4.8 to 8 kb/s that is insensitive to the effects of acoustic background noise and bit errors on the digital link. The main aim has been to develop a coding scheme which provides speech quality and robustness against noise and errors similar to a 16 kb/s continuously variable slope delta (CVSD) coder, but which operates at half its data rate or less. A desirable aim was to keep the complexity of the coding scheme within the scope of what could reasonably be handled by current signal processing chips or by a single custom integrated circuit. Application areas include mobile radio and small Satcomms terminals.
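For context, a minimal sketch of the 16 kb/s CVSD benchmark mentioned above (one-bit delta modulation with syllabic step adaptation); all parameter values are illustrative, not taken from the paper.

import numpy as np

def cvsd_encode(x, run=3, step_min=0.01, step_max=1.0, decay=0.98, boost=1.2):
    # One-bit CVSD encoder: emit sign(x - estimate); grow the step size
    # when the last `run` bits agree (slope overload), otherwise let it decay.
    bits = np.zeros(len(x), dtype=np.uint8)
    est, step, history = 0.0, step_min, []
    for n, sample in enumerate(x):
        b = 1 if sample >= est else 0
        bits[n] = b
        history = (history + [b])[-run:]
        if len(history) == run and len(set(history)) == 1:
            step = min(step * boost, step_max)   # all-same run: steepen slope
        else:
            step = max(step * decay, step_min)   # otherwise relax
        est += step if b else -step
    return bits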
Abstract:
A block-based motion estimation technique is proposed which permits a less general segmentation to be performed using an efficient deterministic algorithm. Applied to image pairs from the Flower Garden and Table Tennis sequences, the algorithm successfully localizes motion discontinuities and detects uncovered regions. The algorithm is implemented in C on a Sun SPARCstation 20. The gradient-based motion estimation required 28.8 s of CPU time, and 500 iterations of the segmentation algorithm required 32.6 s.
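The paper's own estimator is gradient-based; as a generic illustration of the block-based idea, here is a minimal exhaustive block-matching sketch (block size and search range are arbitrary choices, not the paper's).

import numpy as np

def block_match(prev, curr, block=16, search=8):
    # One (dy, dx) motion vector per block, minimising the sum of
    # absolute differences over a +/-search window in the previous frame.
    H, W = curr.shape
    mv = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            ref = curr[by:by+block, bx:bx+block].astype(float)
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= H and 0 <= x and x + block <= W:
                        sad = np.abs(prev[y:y+block, x:x+block] - ref).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            mv[by // block, bx // block] = best_v
    return mv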
Abstract:
In this paper, we describe a video tracking application using the dual-tree polar matching algorithm. The models are specified in a probabilistic setting, and a particle filter is used to perform the sequential inference. Computer simulations demonstrate the ability of the algorithm to track a moving target in simulated video of an urban environment with complete and partial occlusions. © The Institution of Engineering and Technology.
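A minimal sketch of the sequential inference loop described above (a generic bootstrap particle filter; the dual-tree polar matching likelihood is abstracted into a user-supplied scoring function, and the random-walk dynamics are an assumption).

import numpy as np

def particle_filter(likelihood, x0, n_steps, n_particles=500, sigma=2.0):
    # Bootstrap particle filter with random-walk dynamics.
    # likelihood(x, t) must return a positive score for state x against
    # frame t, e.g. via polar matching of DT-CWT coefficients (placeholder).
    rng = np.random.default_rng(0)
    particles = np.asarray(x0, float) + sigma * rng.standard_normal((n_particles, len(x0)))
    estimates = []
    for t in range(n_steps):
        particles += sigma * rng.standard_normal(particles.shape)  # predict
        w = np.array([likelihood(p, t) for p in particles])        # weight
        w = w / w.sum()
        estimates.append(w @ particles)                            # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)            # resample
        particles = particles[idx]
    return np.array(estimates)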
Abstract:
This paper introduces the Interlevel Product (ILP) which is a transform based upon the Dual-Tree Complex Wavelet. Coefficients of the ILP have complex values whose magnitudes indicate the amplitude of multilevel features, and whose phases indicate the nature of these features (e.g. ridges vs. edges). In particular, the phases of ILP coefficients are approximately invariant to small shifts in the original images. We accordingly introduce this transform as a solution to coarse scale template matching, where alignment concerns between decimation of a target and decimation of a larger search image can be mitigated, and computational efficiency can be maintained. Furthermore, template matching with ILP coefficients can provide several intuitive "near-matches" that may be of interest in image retrieval applications. © 2005 IEEE.
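A hedged sketch of forming one ILP level from two adjacent complex-wavelet subbands, assuming the usual construction (interpolate the coarser level to the finer grid, double its phase, multiply by the conjugate); the nearest-neighbour upsampling and conjugation convention here are illustrative and may differ from the published definition.

import numpy as np

def interlevel_product(fine, coarse):
    # fine:   complex DT-CWT subband at level l     (2M x 2N)
    # coarse: complex DT-CWT subband at level l + 1 (M x N)
    up = np.kron(coarse, np.ones((2, 2)))             # crude 2x interpolation
    doubled = np.abs(up) * np.exp(2j * np.angle(up))  # double the coarse phase
    return fine * np.conj(doubled)                    # complex ILP coefficients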
Abstract:
This paper introduces a method by which intuitive feature entities can be created from ILP (InterLevel Product) coefficients. The ILP transform is a pyramid of decimated complex-valued coefficients at multiple scales, derived from dual-tree complex wavelets, whose phases indicate the presence of different feature types (edges and ridges). We use an Expectation-Maximization algorithm to cluster large ILP coefficients that are spatially adjacent and similar in phase. We then demonstrate the relationship that these clusters possess with respect to observable image content, and conclude with a look at potential applications of these clusters, such as rotation- and scale-invariant object recognition. © 2005 IEEE.
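A minimal sketch of the clustering step under stated assumptions: keep only large-magnitude ILP coefficients and run Gaussian-mixture EM over position plus phase (phase embedded on the unit circle to avoid wrap-around); the feature encoding is illustrative, not the paper's exact model.

import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_ilp(ilp, n_clusters=5, mag_quantile=0.9):
    # Cluster spatially adjacent, phase-similar ILP coefficients with EM.
    ys, xs = np.nonzero(np.abs(ilp) > np.quantile(np.abs(ilp), mag_quantile))
    ph = np.angle(ilp[ys, xs])
    feats = np.column_stack([ys, xs, np.cos(ph), np.sin(ph)])
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                          random_state=0).fit(feats)
    return ys, xs, gmm.predict(feats)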
Semantic Discriminant mapping for classification and browsing of remote sensing textures and objects
Abstract:
We present a new approach based on Discriminant Analysis to map a high-dimensional image feature space onto a subspace which has the following advantages: (1) each dimension corresponds to a semantic likelihood; (2) an efficient and simple multiclass classifier is proposed; and (3) it is low-dimensional. This mapping is learnt from a given set of labeled images with class ground truth. In the new space a classifier is naturally derived which performs as well as a linear SVM. We will show that projecting images into this new space provides a database browsing tool which is meaningful to the user. Results are presented on a remote sensing database with eight classes, made available online. The output semantic space is a low-dimensional feature space which opens perspectives for other recognition tasks. © 2005 IEEE.
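As a stand-in for the paper's mapping, a minimal sketch using standard LDA: for C classes the projection has at most C-1 dimensions, and the same fitted model yields a simple multiclass classifier.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def semantic_map(X_train, y_train, X_test):
    # Project high-dimensional image features onto a low-dimensional
    # discriminant subspace and classify in that same space.
    lda = LinearDiscriminantAnalysis()              # at most C-1 dimensions
    Z_train = lda.fit_transform(X_train, y_train)   # semantic coordinates
    Z_test = lda.transform(X_test)                  # browseable embedding
    return Z_train, Z_test, lda.predict(X_test)     # labels for the test set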
Abstract:
There are sometimes occasions when ultrasound beamforming is performed with only a subset of the total data that will eventually be available. The most obvious example is a mechanically-swept (wobbler) probe in which the three-dimensional data block is formed from a set of individual B-scans. In these circumstances, non-blind deconvolution can be used to improve the resolution of the data. Unfortunately, most of these situations involve large blocks of three-dimensional data. Furthermore, the ultrasound blur function varies spatially with distance from the transducer. These two facts make the deconvolution process time-consuming to implement. This paper is about ways to address this problem and produce spatially-varying deconvolution of large blocks of three-dimensional data in a matter of seconds. We present two approaches, one based on hardware and the other based on software. We compare the time they each take to achieve similar results and discuss the computational resources and form of blur model that each requires.
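One plausible software strategy consistent with the description above (not necessarily the paper's): split the volume into depth bands and Wiener-deconvolve each band with a depth-dependent PSF, as in this sketch where psf_for_depth is a user-supplied, hypothetical blur model.

import numpy as np

def wiener_deconv_band(band, psf, nsr=1e-2):
    # FFT-domain Wiener deconvolution of one depth band with its local 3D PSF.
    H = np.fft.fftn(psf, s=band.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener filter
    return np.real(np.fft.ifftn(np.fft.fftn(band) * W))

def spatially_varying_deconv(volume, psf_for_depth, n_bands=8, nsr=1e-2):
    # Approximate spatially-varying deconvolution: per-band filtering with a
    # PSF model that changes with distance from the transducer (axis 0).
    edges = np.linspace(0, volume.shape[0], n_bands + 1, dtype=int)
    out = np.empty_like(volume, dtype=float)
    for z0, z1 in zip(edges[:-1], edges[1:]):
        psf = psf_for_depth((z0 + z1) // 2)        # user-supplied PSF model
        out[z0:z1] = wiener_deconv_band(volume[z0:z1], psf, nsr)
    return out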
Abstract:
We present a novel method to perform an accurate registration of 3-D nonrigid bodies by using phase-shift properties of the dual-tree complex wavelet transform (DT-CWT). Since the phases of DT-CWT coefficients change approximately linearly with the amount of feature displacement in the spatial domain, motion can be estimated using the phase information from these coefficients. The motion estimation is performed iteratively: first by using coarser level complex coefficients to determine large motion components and then by employing finer level coefficients to refine the motion field. We use a parametric affine model to describe the motion, where the affine parameters are found locally by substituting into an optical flow model and by solving the resulting overdetermined set of equations. From the estimated affine parameters, the motion field between the sensed and the reference data sets can be generated, and the sensed data set then can be shifted and interpolated spatially to align with the reference data set. © 2011 IEEE.
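A minimal sketch of the local affine fit: stack one linearised constraint per motion observation (e.g. derived from DT-CWT phase shifts) and solve the overdetermined system for the six affine parameters by least squares. The constraint layout is schematic, not the paper's exact formulation.

import numpy as np

def fit_affine(xs, ys, vx, vy):
    # (xs, ys): sample coordinates; (vx, vy): motion observed there.
    # Model: v(x, y) = A @ [x, y, 1] with A a 2x3 affine parameter matrix.
    Z, O = np.zeros(len(xs)), np.ones(len(xs))
    M = np.vstack([np.column_stack([xs, ys, O, Z, Z, Z]),   # rows for vx
                   np.column_stack([Z, Z, Z, xs, ys, O])])  # rows for vy
    b = np.concatenate([vx, vy])
    params, *_ = np.linalg.lstsq(M, b, rcond=None)  # overdetermined for n >= 3
    return params.reshape(2, 3)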
Abstract:
The University of Cambridge is unusual in that its Department of Engineering is a single department which covers virtually all branches of engineering under one roof. In their first two years of study, our undergraduates study the full breadth of engineering topics and then have to choose a specialization area for the final two years of study. Here we describe part of a course, given towards the end of their second year, which is designed to entice these students to specialize in signal processing and information engineering topics for years 3 and 4. The course is based around a photo editor and an image search application, and it requires no prior knowledge of the z-transform or of 2-dimensional signal processing. It does assume some knowledge of 1-D convolution and basic Fourier methods and some prior exposure to Matlab. The subject of this paper, the photo editor, is written in standard Matlab m-files which are fully visible to the students and help them to see how specific algorithms are implemented in detail. © 2011 IEEE.
Restoration of images and 3D data to higher resolution by deconvolution with sparsity regularization
Abstract:
Image convolution is conventionally approximated by the discrete LTI model. It is well recognized that the higher the sampling rate, the better the approximation. However, sometimes images or 3D data are available only at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. This promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset, and show that good results can be obtained. © 2010 IEEE.
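A hedged sketch of the reweighted-L2 idea for the combined blur-plus-subsampling model, applied directly in image space for brevity (the paper applies it to wavelet coefficients and adds a tree model over the weights); step sizes and the update form are illustrative.

import numpy as np
from scipy.signal import fftconvolve

def l0rl2_sketch(y, psf, shape, factor=2, n_iter=100, lam=0.01,
                 eps=1e-3, step=1.0):
    # Recover x (of size `shape`, a multiple of `factor` per axis) from
    # y = subsample(conv(x, psf)).  The 1/(x^2 + eps) weights make the
    # quadratic penalty approximate the L0 norm.
    x = np.zeros(shape)
    for _ in range(n_iter):
        r = y - fftconvolve(x, psf, mode="same")[::factor, ::factor]
        g = np.zeros_like(x)
        g[::factor, ::factor] = r                         # adjoint of subsampling
        g = fftconvolve(g, psf[::-1, ::-1], mode="same")  # adjoint of blur
        w = 1.0 / (x ** 2 + eps)                          # reweighting ~ L0
        x = (x + step * g) / (1.0 + step * lam * w)       # damped update
    return x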
Abstract:
This paper develops an algorithm for finding sparse signals from limited observations of a linear system. We assume an adaptive Gaussian model for sparse signals. This model results in a least-squares problem with an iteratively reweighted L2 penalty that approximates the L0 norm. We propose a fast algorithm to solve the problem within a continuation framework. In our examples, we show that the correct sparsity map and sparsity level are gradually learnt during the iterations even when the number of observations is reduced, or when observation noise is present. In addition, with the help of sophisticated interscale signal models, the algorithm is able to recover signals to a better accuracy and with a reduced number of observations than typical L1-norm and reweighted-L1-norm methods. © 2010 IEEE.
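A minimal sketch of the continuation framework for a generic system y = Ax: each outer stage solves a reweighted least-squares problem, and the weight floor eps is decreased so the penalty approaches the L0 norm (the interscale signal models are omitted, and the schedule is illustrative).

import numpy as np

def irls_continuation(A, y, lam=0.1, eps_list=(1.0, 0.1, 0.01, 0.001),
                      inner_iter=20):
    # Iteratively reweighted L2 with continuation: weights 1/(x^2 + eps)
    # approximate the L0 norm ever more closely as eps shrinks.
    x = np.zeros(A.shape[1])
    for eps in eps_list:                 # continuation over eps
        for _ in range(inner_iter):
            w = 1.0 / (x ** 2 + eps)     # adaptive Gaussian prior weights
            # solve (A^T A + lam * diag(w)) x = A^T y
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x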