872 results for discrete wavelet transform
Abstract:
One of the main problems in contour analysis is the large amount of data involved in describing the figure. To address this, parameterization is applied: representative data are extracted from a contour using as few coefficients as possible, from which the contour can later be reconstructed without obvious loss of information. For closed contours, the most widely studied parameterization is the discrete Fourier transform (DFT), applied to the sequences of values describing the behaviour of the x and y coordinates along all the points of the trace. In contrast, the DFT cannot be applied directly to open contours, because it requires the x and y values to be equal at the first and last points of the contour. This is because the DFT represents periodic signals without error; if the signals do not end at the same point, there is a discontinuity and oscillations appear in the reconstruction. The goal of this work is to parameterize open contours with the same efficiency obtained when parameterizing closed ones. To this end, a program was designed that applies the DFT to open contours by modifying the x and y sequences. In addition, other applications were developed in Matlab that illustrate different aspects of parameterization and the behaviour of Elliptic Fourier Descriptors (EFD). The results show that the designed application parameterizes open contours with optimal compression, which will facilitate quantitative shape analysis in fields such as ecology, medicine and geography, among others.
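The core trick the abstract describes, modifying the x and y sequences so the DFT sees a periodic signal, can be sketched with a mirror extension: reflecting the open sequence makes it start and end at the same value, so truncating the DFT introduces no endpoint discontinuity. This is an illustrative Python sketch, not the thesis's actual Matlab program; `parameterize_open` and the coefficient count are assumptions.

```python
import numpy as np

def parameterize_open(coords, n_coeffs):
    """Mirror-extend an open coordinate sequence, keep n_coeffs DFT harmonics."""
    ext = np.concatenate([coords, coords[-2:0:-1]])  # even (mirrored) extension
    spec = np.fft.fft(ext)
    spec_trunc = np.zeros_like(spec)
    spec_trunc[:n_coeffs] = spec[:n_coeffs]          # low-frequency terms
    if n_coeffs > 1:
        spec_trunc[-(n_coeffs - 1):] = spec[-(n_coeffs - 1):]  # conjugate partners
    rec = np.real(np.fft.ifft(spec_trunc))
    return rec[:len(coords)]                         # discard the mirrored half

x = np.linspace(0.0, 1.0, 64) ** 2  # open curve: x does not end where it starts
x_rec = parameterize_open(x, 8)     # 8 harmonics instead of 64 samples
err = np.max(np.abs(x - x_rec))
```

Because the mirrored sequence is even, there is no jump at the seam, so a handful of harmonics already reconstruct the open trace closely.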
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach, so that a trial-and-error procedure is no longer necessary to obtain a desired PSNR and/or definition script, reducing cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix, so the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
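The first step the abstract describes, linking a quantization step to a target MSE or PSNR, can be illustrated with the textbook high-rate model, in which a fine uniform quantizer of step q yields MSE ≈ q²/12, so the step for a target PSNR (peak 255 for 8-bit images) has a closed form. This sketch is the generic model, not the paper's exact Laplacian analysis; function names and the 40 dB target are illustrative.

```python
import numpy as np

def step_for_psnr(target_psnr_db, peak=255.0):
    """Quantizer step whose high-rate MSE (q^2/12) meets the target PSNR."""
    mse = peak**2 / 10**(target_psnr_db / 10)
    return np.sqrt(12.0 * mse)

def measured_psnr(signal, q, peak=255.0):
    """Actually quantize and measure the resulting PSNR."""
    quantized = q * np.round(signal / q)
    mse = np.mean((signal - quantized) ** 2)
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=20.0, size=100_000)  # DCT-like Laplacian coefficients
q = step_for_psnr(40.0)
psnr = measured_psnr(coeffs, q)
```

For a step that is fine relative to the source spread, the measured PSNR lands within a fraction of a dB of the target, which is the kind of accuracy the paper reports.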
Abstract:
Many audio watermarking schemes divide the audio signal into several blocks such that part of the watermark is embedded into each of them. One of the key issues in these block-oriented watermarking schemes is to preserve synchronisation, i.e. to recover the exact position of each block in the mark recovery process. In this paper, a novel time-domain synchronisation technique is presented together with a new blind watermarking scheme which works in the Discrete Fourier Transform (DFT or FFT) domain. The combined scheme provides excellent imperceptibility results whilst achieving robustness against typical attacks. Furthermore, the execution of the scheme is fast enough to be used in real-time applications. The excellent transparency of the embedding algorithm makes it particularly useful for professional applications, such as the embedding of monitoring information in broadcast signals. The scheme is also compared with recent results from the literature.
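A minimal sketch of what blind embedding in the FFT domain can look like (this is a generic quantization-index-modulation illustration, not the paper's actual scheme): each watermark bit forces the magnitude of one mid-frequency bin to an even or odd multiple of a step, so the detector needs no original signal. The bin indices and step size are assumptions.

```python
import numpy as np

STEP = 2.0
BINS = range(100, 108)  # illustrative mid-frequency bins, one per bit

def embed(signal, bits):
    """Quantize the magnitude of each chosen bin to a multiple with parity = bit."""
    spec = np.fft.rfft(signal)
    for b, k in zip(bits, BINS):
        mag, ph = np.abs(spec[k]), np.angle(spec[k])
        level = 2 * np.round((mag / STEP - b) / 2) + b  # nearest multiple, parity b
        spec[k] = level * STEP * np.exp(1j * ph)        # keep the phase untouched
    return np.fft.irfft(spec, len(signal))

def extract(signal):
    """Blind detection: read back the parity of each bin's magnitude."""
    spec = np.fft.rfft(signal)
    return [int(np.round(np.abs(spec[k]) / STEP)) % 2 for k in BINS]

rng = np.random.default_rng(1)
audio = rng.normal(size=4096)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(audio, bits)
recovered = extract(marked)
```

Changing only magnitudes of a few bins keeps the distortion small, which is why FFT-domain schemes can be highly transparent; robustness and synchronisation are the hard parts the paper actually addresses.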
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of the paper web with black-box techniques. The data that is analyzed comes from a real paper machine and is collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, gives a tool to separate significant signals from noise.
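The ARMA side of the toolchain above can be sketched as follows: fit a simple AR(2) model by least squares and apply a hand-rolled Ljung-Box Q statistic to its residuals to check that no significant autocorrelation remains. This is a minimal illustration, not the thesis's algorithm; the 30 cut-off used in the test is a loose bound above the 95% chi-square critical value for 10 lags (about 18.3, ignoring the degrees-of-freedom correction for fitted AR parameters).

```python
import numpy as np

def fit_ar2(y):
    """Least-squares AR(2) fit: y[t] ~ phi1*y[t-1] + phi2*y[t-2]."""
    X = np.column_stack([y[1:-1], y[:-2]])
    phi, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    resid = y[2:] - X @ phi
    return phi, resid

def ljung_box_q(resid, lags=10):
    """Ljung-Box Q = n(n+2) * sum_k acf_k^2 / (n-k)."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r**2)
    acf = np.array([np.sum(r[k:] * r[:-k]) / denom for k in range(1, lags + 1)])
    return n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, lags + 1)))

rng = np.random.default_rng(2)
e = rng.normal(size=2000)
y = np.empty(2000)
y[0], y[1] = e[0], e[1]
for t in range(2, 2000):
    y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + e[t]  # true AR(2) process

phi, resid = fit_ar2(y)
q = ljung_box_q(resid, lags=10)
rmse = np.sqrt(np.mean(resid**2))
```

A small Q means the model has absorbed the serial structure, leaving residuals that look like noise; this is exactly the "significant signal vs. noise" separation the abstract mentions.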
Abstract:
Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are studied. In the first method, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. Two transform-based methods are also presented. Vector Quantization (VQ) was used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed. The search method is used to speed up the Linde-Buzo-Gray (LBG) clustering method.
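The first prediction method above, cluster the spectra and then fit one linear predictor per cluster, can be sketched in a few lines: a plain k-means pass groups the spectra, and a pooled least-squares fit learns, per cluster, coefficients that predict each band from the P previous bands. All names and the tiny synthetic "cube" are illustrative, not the thesis's implementation.

```python
import numpy as np

P, K = 2, 2  # predictor order and number of clusters (illustrative)

def kmeans(X, k, iters=10, seed=0):
    """A few rounds of plain k-means; returns a cluster label per spectrum."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

def fit_predictor(spectra):
    """Least-squares coeffs predicting band b from bands b-P..b-1, pooled over b."""
    X = np.concatenate([spectra[:, b - P:b] for b in range(P, spectra.shape[1])])
    y = np.concatenate([spectra[:, b] for b in range(P, spectra.shape[1])])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(3)
bands = np.linspace(0, 1, 16)
flat = 1.0 + 0.1 * rng.normal(size=(50, 16))            # one "material" class
sloped = bands * 2.0 + 0.1 * rng.normal(size=(50, 16))  # a spectrally distinct class

spectra = np.vstack([flat, sloped])
labels = kmeans(spectra, K)
predictors = {j: fit_predictor(spectra[labels == j]) for j in range(K)}
```

In a lossless codec the prediction residuals, not the samples, are entropy-coded; fitting a separate predictor per cluster shrinks those residuals because spectrally similar pixels share one model.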
Abstract:
VariScan is a software package for the analysis of DNA sequence polymorphisms at the whole-genome scale. Among other features, the software: (1) can conduct many population genetic analyses; (2) incorporates a multiresolution wavelet transform-based method that captures relevant information from DNA polymorphism data; and (3) facilitates the visualization of the results in the most commonly used genome browsers.
Abstract:
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic inversion approaches, probabilistic inversion provides the full posterior probability density function of the saturation field and accounts for the uncertainties inherent in the petrophysical parameters relating resistivity to saturation. In this study, the data are from benchtop ERT experiments conducted during gas injection into a quasi-2D brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. The saturation fields are estimated by Markov chain Monte Carlo inversion of the measured data and compared to independent saturation measurements from light transmission through the chamber. Different model parameterizations are evaluated in terms of the recovered saturation and petrophysical parameter values. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values in structural elements whose shape and location are assumed known or are represented by an arbitrary Gaussian bell structure. Results show that the estimated saturation fields are in overall agreement with the saturations measured by light transmission, but differ strongly in terms of parameter estimates, parameter uncertainties and computational intensity. Discretization in the frequency domain (as in the discrete cosine transform parameterization) provides more accurate models at a lower computational cost compared to spatially discretized (Cartesian) models. A priori knowledge about the expected geologic structures allows for non-discretized model descriptions with markedly reduced degrees of freedom.
Constraining the solutions to the known injected gas volume improved estimates of saturation and parameter values of the petrophysical relationship.
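Parameterization (2) above, representing a 2-D saturation field by a small set of discrete cosine transform coefficients, can be sketched directly. The orthonormal DCT-II matrix is built explicitly so no extra libraries are needed; the Gaussian-bell "plume" and the 8x8 coefficient budget are illustrative, not the study's setup.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ C.T == identity."""
    C = np.cos(np.pi * np.arange(n)[:, None] * (2 * np.arange(n) + 1) / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

n = 32
yy, xx = np.mgrid[0:n, 0:n]
field = np.exp(-((xx - 16) ** 2 + (yy - 20) ** 2) / 40.0)  # smooth gas "plume"

C = dct_matrix(n)
coeffs = C @ field @ C.T       # 2-D DCT of the field
k = 8                          # keep only the k x k low-frequency block
trunc = np.zeros_like(coeffs)
trunc[:k, :k] = coeffs[:k, :k]
recon = C.T @ trunc @ C        # inverse 2-D DCT

rel_err = np.linalg.norm(field - recon) / np.linalg.norm(field)
```

Because a smooth plume is dominated by low spatial frequencies, 64 coefficients stand in for 1024 cell values with small error, which is why the frequency-domain parameterization cuts the dimension of the MCMC problem so sharply.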
Abstract:
The study is related to the lossless compression of greyscale images. The goal of the study was to combine two techniques of lossless image compression, the Integer Wavelet Transform and Differential Pulse Code Modulation, to attain a better compression ratio. This is an experimental study, in which we implemented the Integer Wavelet Transform, Differential Pulse Code Modulation, and a predictor model optimized using a Genetic Algorithm. The study gives encouraging results for greyscale images. We achieved a better compression ratio, in terms of entropy, for experiments involving quadrants of the transformed image and using optimized predictor coefficients from the Genetic Algorithm. In another set of experiments involving the whole image, the results are also encouraging and open up many areas for further research, such as implementing the Integer Wavelet Transform on multiple levels and finding optimized predictors at local levels.
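The integer-wavelet half of the pipeline above can be sketched with one level of the integer Haar (S-) transform via lifting. All arithmetic stays in integers, so the forward/inverse pair is exactly reversible, which is the property lossless compression depends on. This is a minimal sketch, not the study's multi-level implementation.

```python
import numpy as np

def haar_forward(pixels):
    """One-level integer Haar: (approx, detail) from even/odd sample pairs."""
    a, b = pixels[0::2].astype(np.int64), pixels[1::2].astype(np.int64)
    detail = a - b
    approx = b + (detail >> 1)   # equals (a + b) >> 1, computed by lifting
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse: undo the lifting steps in reverse order."""
    b = approx - (detail >> 1)
    a = detail + b
    out = np.empty(2 * len(approx), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

rng = np.random.default_rng(4)
row = rng.integers(0, 256, size=64)   # one row of 8-bit pixels
s, d = haar_forward(row)
restored = haar_inverse(s, d)
```

The detail band of a natural image is mostly near zero, so its entropy is much lower than the raw pixels'; DPCM on the approximation band (or on quadrants, as in the study) pushes the entropy down further.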
Abstract:
Electricity spot prices have always been a demanding data set for time series analysis, mostly because of the non-storability of electricity. This feature, which makes electric power unlike other commodities, causes outstanding price spikes. Moreover, the last several years in the financial world seem to show that 'spiky' behaviour of time series is no longer an exception, but rather a regular phenomenon. The purpose of this paper is to seek patterns and relations within electricity price outliers and verify how they affect the overall statistics of the data. For the study, techniques such as the classical Box-Jenkins approach, DFT series smoothing, and GARCH models are used. The results obtained for two geographically different price series show that the patterns in outliers' occurrence are not straightforward. Additionally, there seems to be no rule that would predict the appearance of a spike from volatility, while the reverse effect is quite prominent. It is concluded that spikes cannot be predicted based only on the price series; probably some geographical and meteorological variables need to be included in the modeling.
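The "DFT series smoothing" step mentioned above can be sketched as follows: keep only the lowest DFT harmonics of a price series to estimate its seasonal base level, then flag points that sit far from that base as spike candidates. The synthetic series, harmonic count and 4-sigma threshold are all illustrative choices, not the paper's.

```python
import numpy as np

def dft_smooth(prices, harmonics):
    """Low-pass the series by zeroing all DFT terms above a cutoff."""
    spec = np.fft.rfft(prices)
    spec[harmonics:] = 0.0
    return np.fft.irfft(spec, len(prices))

rng = np.random.default_rng(5)
t = np.arange(365)
prices = 40 + 8 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.5, 365)
spike_days = [50, 180, 300]
prices[spike_days] += 60.0            # inject characteristic price spikes

smooth = dft_smooth(prices, harmonics=6)
resid = prices - smooth
flagged = np.where(resid > 4 * resid.std())[0]   # spike candidates
```

Separating the smooth seasonal component from the residual is what lets the outliers be studied on their own, as the paper does before asking whether volatility predicts them.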
Abstract:
The aim of the present study is to understand the characteristics and properties of different wave modes and the vertical circulation pattern in the troposphere and lower stratosphere over the Indian region, using data obtained from the Indian Mesosphere-Stratosphere-Troposphere (MST) radar, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data, and radiosonde observations. Studies on the vertical motion in the monsoon Hadley circulation are carried out and the results are discussed. From the analysis of the MST radar data, an overall picture of the vertical motion of air over the Indian region is explained, and it is noted that there exists sinking motion both during winter and summer. Besides, the study shows that there is an anomalous northerly wind in the troposphere over the southern peninsular region during the southwest monsoon season. The outcome of the study on the intrusion of the mid-latitude upper tropospheric trough and the associated synoptic-scale vertical velocity over the tropical Indian latitudes is reported and discussed. It shows that there is interaction between the north Indian latitudes and the tropical easterly region when there is an eastward movement of a Western Disturbance across the country. It explains the strengthening of westerlies and a change of winter westerlies into easterlies in the tropical troposphere and lower stratosphere. The divergence field computed over the MST radar station shows intensification of the downward motion in association with the synoptic systems of the northwest Indian region.
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms for implementing functions such as target detection, localisation, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, primarily meant for stationary signals. Interestingly, the signals received by the sonar sensors are often non-stationary, and hence processing methods capable of handling this non-stationarity will fare better than Fourier transform based methods. Time-frequency methods (TFMs) are known as one of the best DSP tools for non-stationary signal processing, with which one can analyze signals in the time and frequency domains simultaneously. However, other than the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in the fields of speech and image processing and in biomedical applications, but not many in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each of the applications.
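The simplest TFM named above, the STFT, illustrates why time-frequency analysis suits non-stationary sonar signals: a chirp spreads its energy across the whole spectrum in a single FFT, but a windowed, frame-by-frame FFT localises the frequency at each instant. The frame size, hop and chirp parameters below are illustrative.

```python
import numpy as np

def stft(x, frame=256, hop=128):
    """Magnitude spectrogram: windowed FFT of overlapping frames."""
    window = np.hanning(frame)
    frames = [x[i:i + frame] * window for i in range(0, len(x) - frame + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
chirp = np.sin(2 * np.pi * (500 + 1000 * t) * t)  # instantaneous freq 500 -> 2500 Hz

S = stft(chirp)
peak_bins = S.argmax(axis=1)        # dominant frequency bin in each frame
peak_hz = peak_bins * fs / 256      # convert bin index to Hz
```

The dominant bin climbs steadily from frame to frame, tracking the sweep, whereas a single full-length FFT of the same chirp shows only a smeared band and no time information.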
Abstract:
Speech processing and consequent recognition are important areas of Digital Signal Processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features are to be extracted from it, and hence the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. After classification using the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new feature extraction method which improves the recognition accuracy. So, a new method called Discrete Wavelet Packet Decomposition (DWPD) is introduced, which utilizes the hybrid features of both DWT and WPD. The performance of this new approach is evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
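The DWT front end described above can be sketched as a multi-level decomposition whose per-subband log-energies form the feature vector handed to the classifier. Real systems typically use a longer wavelet (e.g. via PyWavelets); this stripped-down Haar version, with made-up "digit" signals, just shows the shape of the pipeline.

```python
import numpy as np

def haar_step(x):
    """One DWT level: orthonormal Haar approximation and detail bands."""
    x = x[:len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(signal, levels=4):
    """Feature vector: log-energy of each detail band plus the final approx band."""
    feats, a = [], signal
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(np.log(np.sum(d**2) + 1e-12))
    feats.append(np.log(np.sum(a**2) + 1e-12))
    return np.array(feats)

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
low = np.sin(2 * np.pi * 200 * t)     # stand-ins for two acoustically
high = np.sin(2 * np.pi * 3000 * t)   # different utterances

f_low, f_high = dwt_features(low), dwt_features(high)
```

Signals with different spectral content land in different subbands, so their feature vectors separate, which is what gives a simple Naive Bayes classifier something to work with.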
Abstract:
Speech is the most natural means of communication among human beings, and speech processing and recognition have been intensive areas of research for the last five decades. Since speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. In this work, a speech recognition system is developed for recognizing speaker-independent spoken digits in Malayalam. Voice signals are sampled directly from the microphone. The proposed method is implemented for 1000 speakers uttering 10 digits each. Since the speech signals are affected by background noise, the signals are tuned by removing the noise using a wavelet denoising method based on soft thresholding. Here, the features of the signals are extracted using Discrete Wavelet Transforms (DWT), because they are well suited to processing non-stationary signals like speech, owing to their multi-resolutional, multi-scale analysis characteristics. Speech recognition is a multiclass classification problem, so the feature vector set obtained is classified using three classifiers capable of handling multiple classes, namely Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Naive Bayes. During the classification stage, the input feature vector data is trained using information relating to known patterns and then tested using the test data set. The performance of each classifier is evaluated based on recognition accuracy. All three methods produced good recognition accuracy: DWT and ANN produced a recognition accuracy of 89%, the SVM and DWT combination produced an accuracy of 86.6%, and the Naive Bayes and DWT combination produced an accuracy of 83.5%. ANN is found to be the best among the three methods.
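The soft-thresholding denoising step above can be sketched directly: shrink wavelet detail coefficients toward zero so that small, noise-dominated ones vanish. The universal threshold sigma * sqrt(2 log n) is the textbook choice; applying one Haar level to a piecewise-constant signal keeps the example short and is not the paper's exact setup.

```python
import numpy as np

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t; kill those with |c| <= t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(6)
clean = np.repeat([0.0, 4.0, -2.0, 1.0], 64)      # piecewise-constant signal
noisy = clean + rng.normal(0, 0.5, clean.size)

# one-level orthonormal Haar, threshold the details, invert
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)
sigma = 0.5                                        # noise level (known here)
d = soft_threshold(d, sigma * np.sqrt(2 * np.log(noisy.size)))
den = np.empty_like(noisy)
den[0::2] = (a + d) / np.sqrt(2)
den[1::2] = (a - d) / np.sqrt(2)

err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((den - clean) ** 2)
```

The detail band of a smooth or piecewise-constant signal is almost pure noise, so thresholding it removes noise while barely touching the signal structure.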
Abstract:
Cancer treatment is most effective when the cancer is detected early, and progress in treatment will be closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if these algorithms work synergistically with those for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram: 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. WT-based techniques are applied to the remaining mammograms, those that are obviously abnormal, to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to the radiologists. The state-of-the-art DWT computation algorithms are not suitable for practical applications with memory and delay constraints, as the DWT is not a block transform. Hence, in this work, the development of a Block DWT (BDWT) computational structure with a low processing-memory requirement has also been taken up.
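The block-DWT idea above can be sketched as follows: instead of transforming the whole mammogram at once, process fixed-size tiles independently so only one block is ever held in working memory. A one-level 2-D Haar per block stands in for the thesis's BDWT structure; the block size and image are illustrative.

```python
import numpy as np

def haar2d(block):
    """One-level 2-D Haar: LL, LH, HL, HH quarter-bands (averaging form)."""
    a = (block[0::2] + block[1::2]) / 2.0   # transform rows first
    d = (block[0::2] - block[1::2]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0    # then columns
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def block_dwt(image, bsize=8):
    """Tile the image and transform each tile independently (bounded memory)."""
    h, w = image.shape
    out = {}
    for i in range(0, h, bsize):
        for j in range(0, w, bsize):
            out[(i, j)] = haar2d(image[i:i + bsize, j:j + bsize])
    return out

rng = np.random.default_rng(7)
image = rng.random((32, 32))   # stand-in for a mammogram region
bands = block_dwt(image)
```

Bright, isolated microcalcifications show up as large high-frequency (HH/HL/LH) coefficients in their tile, and the tile-by-tile structure bounds both memory use and latency, which is the motivation for the BDWT.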