945 results for Local phase quantization


Relevance:

20.00%

Publisher:

Abstract:

In this paper we describe the Large Margin Vector Quantization (LMVQ) algorithm, which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm that proceeds from an estimate of the class-conditional probability densities, and show that the key behaviours of Kohonen's well-known LVQ2 and LVQ3 algorithms emerge as natural consequences of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy comparable to that obtained via LVQ1, LVQ2, and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
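The update rule itself is not given in the abstract, but the LVQ2/LVQ3 behaviour it says LMVQ recovers can be illustrated with the classical attract/repel prototype move, sketched below. The function name, learning rate, and data layout are illustrative assumptions; LMVQ derives such moves as gradient ascent on a margin rather than applying them directly.

```python
import numpy as np

def lvq_style_update(prototypes, labels, x, y, lr=0.05):
    """Classical LVQ-style move: attract the nearest prototype of the
    true class y toward sample x, repel the nearest wrong-class one.

    Assumes at least one prototype per class.
    """
    d = np.linalg.norm(prototypes - x, axis=1)
    same, other = labels == y, labels != y
    i = np.flatnonzero(same)[np.argmin(d[same])]    # nearest correct prototype
    j = np.flatnonzero(other)[np.argmin(d[other])]  # nearest incorrect prototype
    prototypes[i] += lr * (x - prototypes[i])       # attract
    prototypes[j] -= lr * (x - prototypes[j])       # repel
    return prototypes
```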

Relevance:

20.00%

Publisher:

Abstract:

In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group, as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate by analysing the history of each group, including splitting and merging events. A simplified ground-truth annotation strategy results in an approach that is highly accurate yet has minimal setup requirements.
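Read literally, the counting rule above is just a sum of per-group estimates; the sketch below shows that shape, with a median over each tracked group's history as one simple way tracking can lend robustness. `group_size` is a hypothetical stand-in for the paper's local-feature size estimator.

```python
import statistics

def estimate_crowd(tracked_groups, group_size):
    """Total crowd estimate = sum of per-group size estimates.

    tracked_groups: one list of per-frame feature vectors per blob,
    accumulated by the tracker across split/merge events.
    """
    total = 0
    for history in tracked_groups:
        per_frame = [group_size(features) for features in history]
        total += round(statistics.median(per_frame))  # robust to outlier frames
    return total
```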

Relevance:

20.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals).

Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and straightforward quantization of the filter coefficients (normally called reflection coefficients).

We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modelling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations.

Using these analytical results, we demonstrate a new property, the polynomial-order-reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise; this technique yields a better probability of detection for the reduced-order phase signals than the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated; this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer.

The performance of adaptive lattice filters is then investigated for the second class of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
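As a rough sketch of the least-mean p-norm lattice idea described above, the fragment below adapts each stage's reflection coefficient with a fractional lower-order (p-norm) gradient step; at p = 2 it reduces to the familiar stochastic gradient lattice update. The function name, step size, order p, and stage count are illustrative assumptions, not the thesis's exact algorithm or its normalized variant.

```python
import numpy as np

def lmp_lattice(x, num_stages=2, mu=0.01, p=1.2):
    """Least-mean p-norm (LMP) gradient lattice sketch.

    Each stage m holds one reflection coefficient k[m], adapted to
    reduce the p-th-order moment of its forward/backward prediction
    errors (a fractional lower-order criterion suited to alpha-stable
    inputs, whose variance is infinite).
    """
    k = np.zeros(num_stages)          # reflection coefficients
    b_prev = np.zeros(num_stages)     # delayed backward errors, one per stage
    history = []
    for n in range(len(x)):
        f = b = float(x[n])           # stage-0 forward/backward errors
        for m in range(num_stages):
            f_new = f - k[m] * b_prev[m]
            b_new = b_prev[m] - k[m] * f
            # p-norm gradient step; p = 2 gives the usual LMS-like update
            grad = (abs(f_new) ** (p - 1) * np.sign(f_new) * b_prev[m]
                    + abs(b_new) ** (p - 1) * np.sign(b_new) * f)
            k[m] += mu * grad
            b_prev[m] = b             # store this sample's input backward error
            f, b = f_new, b_new
        history.append(k.copy())
    return k, np.array(history)
```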

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the decision rests on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In the lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed: no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without needing to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
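One concrete step in the quantizer design above is fitting a generalized Gaussian to each subband of wavelet coefficients. The sketch below uses the common moment-matching route (the ratio of the mean absolute value to the root mean square pins down the shape parameter) as a stand-in for the least squares formulation in the thesis; the bracketing interval is an illustrative assumption.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def estimate_ggd_shape(coeffs):
    """Estimate the generalized Gaussian shape parameter beta of a
    wavelet subband by moment matching.

    For a GGD, E|x| / sqrt(E[x^2]) equals
    Gamma(2/beta) / sqrt(Gamma(1/beta) * Gamma(3/beta)),
    which is monotonic in beta, so a root finder can invert it.
    """
    x = np.asarray(coeffs, dtype=float)
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    f = lambda b: gamma(2 / b) / np.sqrt(gamma(1 / b) * gamma(3 / b)) - r
    return brentq(f, 0.1, 10.0)   # beta = 2 recovers the Gaussian case
```

A Laplacian-like subband (heavy tails) yields beta near 1, flagging it for finer quantization or more bits in the noise-shaping allocation.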

Relevance:

20.00%

Publisher:

Abstract:

Science and technology are promoted as major contributors to national development. Consequently, improved science education has been placed high on the agenda of tasks to be tackled in many developing countries, although progress has often been limited. In fact, there have been claims that the enormous investment in teaching science in developing countries has basically failed, with many reports of how efforts to teach science in developing countries often result in rote learning of strange concepts, mere copying of factual information, and a general lack of understanding on the part of local students. These generalisations can be applied to science education in Fiji. Muralidhar (1989) has described a situation in which upper primary and middle school students in Fiji were given little opportunity to engage in practical work; an extremely didactic form of teacher exposition was the predominant method of instruction during science lessons. He concluded that, amongst other things, teachers' limited understanding, particularly of aspects of physical science, resulted in their rigid adherence to the textbook or the omission of certain activities or topics. Although many of the problems associated with science education in developing countries have been documented, few attempts have been made to understand how non-Western students might better learn science.

This study addresses the issue of Fiji pre-service primary teachers' understanding of a key aspect of physical science, namely matter and how it changes, and their responses to learning experiences based on a constructivist epistemology. Initial interviews were used to probe the pre-service teachers' understanding of this domain of science. The data were analysed to identify students' alternative and scientific conceptions. These conceptions were then used to construct Concept Profile Inventories (CPIs), which allowed for qualitative comparison of the concepts of the two ethnic groups who took part in the study. This phase of the study also provided some insight into the interaction of scientific information and traditional beliefs in non-Western societies. A quantitative comparison of the groups' conceptions was conducted using a Science Concept Survey instrument developed from the CPIs. These data provided considerable insight into the aspects of matter where the pre-service teachers' understanding was particularly weak.

On the basis of these preliminary findings, a six-week teaching programme aimed at improving the students' understanding of matter was implemented in an experimental design with a group of students. The intervention involved elements of pedagogy, such as the use of analogies and concept maps, which were novel to most of those who took part. At the conclusion of the teaching programme, the learning outcomes of the experimental group were compared with those of a control group taught in a more traditional manner. These outcomes were assessed quantitatively by means of pre- and post-tests and a delayed post-test, and qualitatively using an interview protocol. The students' views on the various teaching strategies used with the experimental group were also sought.

The findings indicate that in the domain of matter little variation exists in the alternative conceptions held by Fijian and Indian students, suggesting that cultural influences may be minimal in their construction. Furthermore, the teaching strategies implemented with the experimental group, although largely derived from Western research, showed considerable promise in the context of Fiji, where they appeared to be effective in improving the understanding of students from different cultural backgrounds. These outcomes may be of significance to those involved in teacher education and curriculum development in other developing countries.

Relevance:

20.00%

Publisher:

Abstract:

This work is focussed on developing a commissioning procedure so that a Monte Carlo model, which uses BEAMnrc's standard VARMLC component module, can be adapted to match a specific BrainLAB m3 micro-multileaf collimator (μMLC). A set of measurements is recommended for use as a reference against which the model can be tested and optimised. These include radiochromic film measurements of dose from small and offset fields, as well as measurements of μMLC transmission and interleaf leakage. Simulations and measurements to obtain μMLC scatter factors are shown to be insensitive to the relevant model parameters and are therefore not recommended, unless the output of the linear accelerator model is in doubt. Ultimately, this note provides detailed instructions for those intending to optimise a VARMLC model to match the dose delivered by their local BrainLAB m3 μMLC device.
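The optimisation this note prescribes can be pictured as a parameter sweep against the reference measurements. In the sketch below, `simulate` is a hypothetical wrapper that runs the BEAMnrc/VARMLC model with candidate leaf parameters and returns a dose profile sampled at the film measurement points; the parameter names and the RMS figure of merit are assumptions (a gamma analysis would be the more usual clinical choice).

```python
import numpy as np
from itertools import product

def commission_varmlc(measured, simulate, leaf_gaps, leaf_offsets):
    """Grid-search candidate VARMLC leaf parameters against a film measurement.

    Keeps the (gap, offset) pair whose simulated profile has the
    smallest RMS difference from the measured reference profile.
    """
    best, best_err = None, np.inf
    for gap, offset in product(leaf_gaps, leaf_offsets):
        dose = simulate(leaf_gap=gap, leaf_offset=offset)   # hypothetical wrapper
        err = np.sqrt(np.mean((np.asarray(dose) - measured) ** 2))
        if err < best_err:
            best, best_err = (gap, offset), err
    return best, best_err
```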

Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose a new method for utilising phase information by complementing traditional magnitude-only spectral subtraction speech enhancement with Complex Spectrum Subtraction (CSS). The proposed approach has the following advantages over traditional magnitude-only spectral subtraction: (a) it introduces complementary information to the enhancement algorithm; (b) it reduces the total number of algorithmic parameters; and (c) it is designed to improve clean speech magnitude spectra and is therefore suitable for both automatic speech recognition (ASR) and speech perception applications. Oracle-based ASR experiments verify this approach, showing an average 20% relative word accuracy improvement when accurate estimates of the phase spectrum are available. Based on sinusoidal analysis and assuming stationarity between observations (an assumption shown to be better approximated as the frame rate is increased), this paper also proposes a novel method for acquiring the phase information, called Phase Estimation via Delay Projection (PEDEP). Further oracle ASR experiments validate the potential of the proposed PEDEP technique in ideal conditions. A realistic implementation of CSS with PEDEP shows performance comparable to state-of-the-art spectral subtraction techniques in 15-20 dB signal-to-noise ratio environments. These results clearly demonstrate the potential of phase spectra in spectral subtractive enhancement applications, and at the same time highlight the need for more accurate phase estimates in a wider range of noise conditions.
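A minimal sketch of the contrast the paper draws, assuming the noise magnitude and a phase estimate are available for one analysis frame: CSS subtracts a complex noise estimate, while the magnitude-only baseline floors negative magnitudes and reuses the noisy phase. The PEDEP estimator itself is not reproduced here; `noise_phase` simply stands in for its output.

```python
import numpy as np

def complex_spectrum_subtraction(noisy_frame, noise_mag, noise_phase):
    """One-frame sketch of CSS versus magnitude-only subtraction.

    noise_mag and noise_phase must match the rfft bin count,
    i.e. len(noisy_frame) // 2 + 1.
    """
    Y = np.fft.rfft(noisy_frame)                  # noisy complex spectrum
    N = noise_mag * np.exp(1j * noise_phase)      # complex noise estimate
    S_css = Y - N                                 # complex spectrum subtraction
    # magnitude-only baseline: subtract, floor at zero, keep noisy phase
    S_mag = np.maximum(np.abs(Y) - noise_mag, 0.0) * np.exp(1j * np.angle(Y))
    return S_css, S_mag
```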

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel approach to road-traffic control for interconnected junctions. With a local fuzzy-logic controller (FLC) installed at each junction, a dynamic-programming (DP) technique is proposed to derive the green time for each phase in a traffic-light cycle. Coordination parameters from the adjacent junctions are also taken into consideration, so that organized control is extended beyond a single junction. Instead of pursuing the absolute optimization of traffic delay, this study examines a practical approach that enables simple coordination among junctions while still reducing delays where possible. The simulation results show that the delay per vehicle can be substantially reduced, particularly when the traffic demand approaches the junction capacity. The implementation of this controller does not require complicated or demanding hardware, and such simplicity makes it a useful tool for offline studies or real-time control purposes.
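The pairing of a per-junction FLC with DP can be sketched as a one-dimensional allocation problem: split a fixed cycle length among the phases so that the summed delay estimate is minimised. Here `delay(phase, g)` is a hypothetical stand-in for the fuzzy controller's delay estimate, and the step size and minimum green are illustrative; coordination terms from adjacent junctions would enter through that delay function.

```python
from functools import lru_cache

def allocate_green_times(delay, phases, cycle, step=5, g_min=10):
    """DP split of `cycle` seconds of green among `phases`.

    Assumes cycle >= g_min * len(phases); returns (total delay, plan).
    """
    @lru_cache(maxsize=None)
    def best(i, remaining):
        if i == len(phases) - 1:          # last phase takes whatever is left
            return delay(phases[i], remaining), (remaining,)
        options = []
        max_g = remaining - g_min * (len(phases) - 1 - i)
        for g in range(g_min, max_g + 1, step):
            cost, plan = best(i + 1, remaining - g)
            options.append((delay(phases[i], g) + cost, (g,) + plan))
        return min(options)

    return best(0, cycle)
```

With a two-phase junction, `allocate_green_times(d, ('NS', 'EW'), 60)` would return the 60 s split that minimises the estimated delay under `d`.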

Relevance:

20.00%

Publisher:

Abstract:

Policymakers often propose strict enforcement strategies to fight the shadow economy and to increase tax morale. However, there is an alternative, bottom-up approach: decentralising political power to those who are close to the problems. This paper analyses the relationship between local autonomy and compliance. We use data on tax morale at the individual level and macro data on the size of the shadow economy to analyse the relevance of local autonomy for compliance in Switzerland. The findings suggest a positive relationship between local autonomy and tax morale, and a negative relationship between local autonomy and the size of the shadow economy.