983 results for discrete wavelet transforms


Relevance:

80.00%

Publisher:

Abstract:

Selection of relevant features is an open problem in brain-computer interface (BCI) research. Features extracted from brain signals are often high dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves classifier performance and reduces the computational cost of the system. In this study, we use a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. We employ the Discrete Wavelet Transform to obtain a high-dimensional feature set and classify it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
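As a sketch of the wavelet feature-extraction step described above (the Bacterial Foraging Optimization / Learning Automata selector itself is beyond a short example), here is a minimal Haar DWT feature extractor in NumPy; the epoch length, level count and random test signal are illustrative assumptions, not the study's settings:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass branch
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass branch
    return a, d

def dwt_features(x, levels=3):
    """Concatenate the detail coefficients of several levels plus the final
    approximation into one (high-dimensional) feature vector."""
    feats, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(d)
    feats.append(a)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
epoch = rng.standard_normal(256)             # stand-in for one EEG epoch
features = dwt_features(epoch, levels=3)
print(features.shape)                        # (256,): same length, new basis
```

Because the transform is orthonormal, the feature vector preserves the epoch's energy; a feature selector then searches over subsets of these coefficients.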

Recently we have developed a new form of discrete wavelet transform, which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. This introduces limited redundancy (2^m:1 for m-dimensional signals) and allows the transform to provide approximate shift invariance and directionally selective filters (properties lacking in the traditional wavelet transform) while preserving the usual properties of perfect reconstruction and computational efficiency with well-balanced frequency responses. In this paper we analyse why the new transform can be designed to be shift invariant, describe how to estimate the accuracy of this approximation, and show how to design suitable filters to achieve it.
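The shift dependence that the dual-tree transform is designed to remove can be demonstrated with the ordinary decimated DWT. A minimal sketch using level-1 Haar detail coefficients (the step signal and its one-sample shift are illustrative assumptions):

```python
import numpy as np

def haar_detail(x):
    """Level-1 Haar detail (high-pass) coefficients of a decimated DWT."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

step    = np.array([0., 0., 0., 0., 1., 1., 1., 1.])  # edge on a pair boundary
shifted = np.array([0., 0., 0., 1., 1., 1., 1., 1.])  # same edge, one sample earlier

e0 = np.sum(haar_detail(step) ** 2)
e1 = np.sum(haar_detail(shifted) ** 2)
print(e0, e1)   # zero detail energy vs. substantial energy for the shifted edge
```

The same edge produces completely different subband energies depending on its alignment with the decimation grid; the dual tree's second set of filters fills in the dropped samples, which is where its approximate shift invariance comes from.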

Steganography based on quantization index modulation (QIM) is increasingly threatened by steganalysis. Instead of embedding in the conventional DCT domain, this paper embeds in a non-uniform DCT domain, with the transform parameters serving as the key, yielding an NDCT-QIM image steganography method. Because the embedded signal is diffuse in any domain an attacker may guess, the NDCT-QIM method hinders steganalytic detection of embedding features. Analysis and experiments show that it better resists steganalysis based on common statistics such as gradient energy, histograms and wavelet statistical features, enhancing the covertness of the embedding.
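A minimal sketch of the underlying QIM embedding rule (the paper's non-uniform DCT and its keyed parameters are not reproduced here; the coefficient value and step size are illustrative assumptions):

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    """Quantization index modulation: move x onto the sub-lattice for `bit`."""
    offset = bit * delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_detect(y, delta=8.0):
    """Recover the bit by finding the nearer of the two sub-lattices."""
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)

coeff = 37.3                        # a hypothetical transform coefficient
marked = qim_embed(coeff, 1)
print(marked, qim_detect(marked))   # 36.0 1: coefficient quantized, bit recovered
```

The two interleaved lattices (even and odd multiples of delta/2) encode the bit; embedding in a keyed, non-uniform domain spreads this structure out in any domain the attacker analyses.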

Offshore seismic exploration involves high investment and high risk, and faces many problems, such as multiples. Technology for high-resolution, high signal-to-noise-ratio marine seismic data processing has therefore become an important project. In this paper, based on analysis of marine seismic exploration, a survey of the literature, and an integration of current mainstream and emerging technology, we propose multi-scale decomposition of both prestack and poststack seismic data using the wavelet and Hilbert-Huang transforms, together with a theory of phase deconvolution, and study the related algorithms. The pyramid algorithm for decomposition and reconstruction, given by Mallat's algorithm for the discrete wavelet transform, is introduced into seismic data processing; its validity is shown by tests with field data. The main idea of the Hilbert-Huang transform is empirical mode decomposition, with which any complicated data set can be decomposed into a finite, and often small, number of intrinsic mode functions that admit a well-behaved Hilbert transform. After the decomposition, an analytic signal is constructed by the Hilbert transform, from which the instantaneous frequency and amplitude, and hence the Hilbert spectrum, can be obtained. This decomposition method is adaptive and highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and non-stationary processes. The phenomena of fitting overshoot, undershoot and end swings in the Hilbert-Huang transform are analysed, and effective methods for eliminating them are studied in the paper. Multi-scale decomposition of both prestack and poststack seismic data achieves amplitude-preserved processing, greatly enhances seismic data resolution, and overcomes the problem that conventional methods cannot restore the amplitudes of different frequency components uniformly.
The method of phase deconvolution overcomes the minimum-phase limitation of traditional deconvolution and better matches the basic fact that, in practice, the seismic wavelet is mixed phase, so it yields a more reliable result. In the applied research, high-resolution, relative-amplitude-preserved processing results were obtained by careful analysis and application of the above methods to seismic data from four different target areas of the China Sea. Finally, a set of processing flows and methods was formed, which has been applied in actual production and has made good progress with large economic benefit.
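Mallat's pyramid algorithm mentioned above can be sketched with Haar analysis/synthesis filters; this is a toy stand-in for the paper's seismic processing, with a random trace and level count chosen purely for illustration:

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into approximation and detail (one pyramid level)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_synthesis(a, d):
    """Invert one pyramid level exactly (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(1)
trace = rng.standard_normal(1024)      # stand-in for one seismic trace

details, a = [], trace                 # three-level pyramid decomposition...
for _ in range(3):
    a, d = haar_analysis(a)
    details.append(d)

rec = a                                # ...and reconstruction in reverse order
for d in reversed(details):
    rec = haar_synthesis(rec, d)

print(np.max(np.abs(rec - trace)))     # perfect reconstruction up to rounding
```

Multi-scale processing operates on the per-level detail bands before resynthesis; perfect reconstruction is what makes amplitude-preserved processing possible.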

An area-efficient, high-throughput architecture based on distributed arithmetic is proposed for the 3D discrete wavelet transform (DWT). The 3D DWT processor was designed in VHDL and mapped to a Xilinx Virtex-E FPGA. The processor runs at up to 85 MHz and can compute the five-level DWT analysis of a 128 x 128 x 128 fMRI volume image in 20 ms.
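Since the 3D DWT is separable, one level amounts to a 1D analysis step along each axis in turn. A minimal NumPy sketch with Haar filters (the 16x16x16 random volume stands in for a real 128x128x128 fMRI image; the hardware distributed-arithmetic design is of course not modelled):

```python
import numpy as np

def haar_step(x, axis):
    """One separable Haar analysis step along a given axis."""
    x = np.moveaxis(np.asarray(x, dtype=float), axis, 0)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.moveaxis(a, 0, axis), np.moveaxis(d, 0, axis)

vol = np.random.default_rng(2).standard_normal((16, 16, 16))  # toy "fMRI volume"

# one 3D DWT level = a 1D step along each of the three axes;
# keeping only the low-pass branch here yields the LLL approximation band
lll = vol
for ax in range(3):
    lll, _ = haar_step(lll, axis=ax)

print(lll.shape)   # (8, 8, 8): half resolution in every dimension
```

Five such levels on the LLL band give the multiresolution analysis the processor computes; the seven detail bands dropped at each level carry the remaining coefficients.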

Audio scrambling can be employed to ensure confidentiality in audio distribution. We first describe scrambling for raw audio using the discrete wavelet transform (DWT) and then focus on MP3 audio scrambling. We perform scrambling based on a set of keys, which allows for a set of audio outputs of different qualities. During descrambling, the number of keys provided and the number of rounds of descrambling performed decide the output audio quality. We also perform scrambling using multiple keys on the MP3 audio format. With a subset of the keys, we can descramble to obtain low-quality audio; with all of the keys, we recover the original-quality audio. Our experiments show that the proposed algorithms are effective, fast and simple to implement, while providing flexible control over the progressive quality of the audio output. The security level provided by the scheme is sufficient for protecting MP3 music content.
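A minimal sketch of key-driven scrambling with exact descrambling; for brevity it permutes a raw audio frame directly rather than DWT coefficients or MP3 data, and the key value is an illustrative assumption:

```python
import numpy as np

def scramble(samples, key):
    """Permute a frame with a key-seeded pseudo-random permutation."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(len(samples))
    return samples[perm], perm

def descramble(scrambled, perm):
    """Invert the permutation exactly, restoring the original frame."""
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out

audio = np.sin(2 * np.pi * 440 * np.arange(1024) / 8000)  # toy 440 Hz frame
mixed, perm = scramble(audio, key=1234)
restored = descramble(mixed, perm)
print(np.allclose(restored, audio))   # True
```

In the multi-key, multi-round setting described above, each key seeds one such round; supplying only a subset of keys undoes only some rounds, which is what yields the progressively degraded audio quality.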

This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing watermark distortions to adapt better to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread-transform embedding algorithm is devised; this combines the robustness of spread-spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
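The spread-transform idea can be sketched as quantizing the projection of a host block onto a secret spreading direction (spread-transform QIM); the block size, step size and random spreading vector are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def st_embed(x, bit, u, delta=4.0):
    """Spread-transform embedding: quantize the projection of x onto u."""
    u = u / np.linalg.norm(u)
    p = x @ u                                  # scalar projection of the block
    offset = bit * delta / 2.0
    q = delta * np.round((p - offset) / delta) + offset
    return x + (q - p) * u                     # correct only along u

def st_detect(y, u, delta=4.0):
    """Read the bit from which sub-lattice the projection lies on."""
    u = u / np.linalg.norm(u)
    return int(np.mod(np.round((y @ u) / (delta / 2.0)), 2))

rng = np.random.default_rng(3)
block = rng.standard_normal(64)    # host coefficients (e.g. a wavelet subband)
u = rng.standard_normal(64)        # secret spreading direction

marked = st_embed(block, 1, u)
print(st_detect(marked, u))        # 1
```

The embedding distortion is spread over the whole block (spread-spectrum robustness) while the quantized projection carries one bit per block (quantization-style capacity).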

In this paper, we present a novel approach to person verification by fusing face and lip features. Specifically, the face is modeled by the discriminative common vector and the discrete wavelet transform. Our lip features are simple geometric features based on the lip contour, which can be interpreted as multiple spatial widths and heights measured from the centre of mass. To combine these features, we consider two simple fusion strategies, data fusion before training and score fusion after training, working with two different face databases. Fusing lips and face together boosts performance, achieving equal error rates as low as 0.4% and 0.28%, respectively, confirming that our approach is effective and promising.
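A minimal sketch of the second strategy, score fusion after training: min-max normalise each matcher's scores and combine them with a sum rule. The raw scores and the equal weighting are illustrative assumptions, not the paper's values:

```python
import numpy as np

def minmax(scores):
    """Map raw matcher scores to [0, 1] before fusion."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(face, lip, w=0.5):
    """Sum-rule score fusion of the two modalities."""
    return w * minmax(face) + (1 - w) * minmax(lip)

face = [12.0, 30.0, 25.0]   # hypothetical raw face-matcher scores
lip  = [0.2, 0.9, 0.4]      # hypothetical raw lip-matcher scores
fused = fuse(face, lip)
print(fused)                # the candidate ranked best by both ends up highest
```

A verification threshold is then applied to the fused score; sweeping it traces the curve from which the equal error rate is read.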

Objective: Waveform analysis has been used to assess vascular resistance and predict cardiovascular events. We aimed to identify microvascular abnormalities in patients with impaired glucose tolerance (IGT) using ocular waveform analysis. The effects of pioglitazone were also assessed. Methods: Forty patients with IGT and twenty-four controls were studied. Doppler velocity recordings were obtained from the central retinal, ophthalmic and common carotid arteries, and sampled at 200 Hz. A discrete wavelet-based analysis method was employed to quantify the waveforms. The resistive index (RI) was also determined. Patients with IGT were randomised to pioglitazone or placebo and measurements were repeated after 12 weeks of treatment. Results: In the ocular waveforms, significant differences in power spectra were observed in frequency band four (corresponding to frequencies between 6.25 and 12.50 Hz) between groups (p
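The band edges quoted above follow from halving the spectrum at each DWT stage: detail level j spans [fs/2^(j+1), fs/2^j] Hz, so at 200 Hz sampling, level 4 covers 6.25 to 12.50 Hz. A minimal Haar-based sketch (the 9 Hz test tone is an illustrative assumption; the study's wavelet and quantification details are not reproduced):

```python
import numpy as np

def haar_analysis(x):
    """One DWT level: low-pass approximation and high-pass detail."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

fs = 200.0                               # sampling rate used in the study
t = np.arange(1024) / fs
signal = np.sin(2 * np.pi * 9.0 * t)     # hypothetical 9 Hz waveform component

# cascade to level 4 and collect the energy of each detail band
a, bands = signal, {}
for j in range(1, 5):
    a, d = haar_analysis(a)
    bands[j] = np.sum(d ** 2)

band_lo, band_hi = fs / 2 ** 5, fs / 2 ** 4
print(band_lo, band_hi)                  # 6.25 12.5: the edges of band four
print(max(bands, key=bands.get))         # the 9 Hz tone lands mainly in band 4
```

Comparing such per-band energies between patient groups is the kind of quantification a wavelet power-spectrum analysis performs.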

The thesis introduces the octree and addresses the range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm, and its iterative counterpart, for the raster-to-octree conversion of CAT scan slices is presented; to improve the speed of generating the octree from the slices, the possibility of exploiting the inherent parallelism in the conversion programme is explored in this thesis. The octree node, which stores the volume information of a cube, often stores only the average density, which can lead to a "patchy" distribution of density during image reconstruction. In an attempt to alleviate this problem, the possibility of using vector quantization (VQ) to represent the information contained within a cube is explored. Considering the ease of compressing the information during the generation of octrees from CAT scan slices, the use of wavelet transforms to generate the compressed information in a cube is proposed. The modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the form of an octree is a complex task, essentially because of the requirement to display volumetric information. Rays traced from each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
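The merging idea behind the conversion can be sketched in a few lines: a cube becomes a single leaf when its voxels are homogeneous, otherwise it splits into eight children. The tiny 4x4x4 volume is an illustrative assumption, not CAT scan data:

```python
import numpy as np

def build_octree(vol):
    """Build an octree: one leaf for a homogeneous cube, else eight children."""
    if np.all(vol == vol.flat[0]):
        return float(vol.flat[0])                 # leaf: uniform density
    h = vol.shape[0] // 2
    return [build_octree(vol[x:x + h, y:y + h, z:z + h])
            for x in (0, h) for y in (0, h) for z in (0, h)]  # 8 sub-cubes

vol = np.zeros((4, 4, 4))
vol[:2, :2, :2] = 1.0             # one occupied octant
tree = build_octree(vol)
print(tree)                       # [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

A real bottom-up pass would instead merge eight equal children into their parent while sweeping the slices; storing a VQ index or wavelet coefficients per leaf, rather than a single average density, is the refinement the thesis proposes.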

During the 1990s the Wavelet Transform emerged as an important signal processing tool with potential applications in time-frequency analysis and non-stationary signal processing. Wavelets have gained popularity in a broad range of disciplines like signal/image compression, medical diagnostics, boundary value problems, geophysical signal processing, statistical signal processing, pattern recognition, underwater acoustics, etc. In 1993, G. Evangelista introduced the Pitch-Synchronous Wavelet Transform, which is particularly suited for pseudo-periodic signal processing. The work presented in this thesis mainly concentrates on two interrelated topics in signal processing, viz. Wavelet Transform based signal compression and the computation of the Discrete Wavelet Transform. A new compression scheme is described in which the Pitch-Synchronous Wavelet Transform technique is combined with the popular Linear Predictive Coding method for pseudo-periodic signal processing. Subsequently, a novel Parallel Multiple Subsequence structure is presented for the efficient computation of the Wavelet Transform. Case studies are also presented to highlight the potential applications.
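The computation being optimised here is Mallat's algorithm: each DWT level is a filter-and-downsample-by-two step. A minimal sketch with the Haar analysis pair (the input ramp signal is an illustrative assumption; any orthogonal filter pair drops in the same way):

```python
import numpy as np

# analysis filters for the Haar wavelet
h = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2.0)   # high-pass

def dwt_step(x):
    """One DWT level as 'filter then downsample by two' (Mallat's algorithm)."""
    a = np.convolve(x, h[::-1])[1::2]      # approximation coefficients
    d = np.convolve(x, g[::-1])[1::2]      # detail coefficients
    return a, d

x = np.arange(8, dtype=float)
a, d = dwt_step(x)
print(a)   # pairwise sums scaled by 1/sqrt(2)
print(d)   # pairwise differences scaled by 1/sqrt(2)
```

Each output sample depends only on a short window of input, which is what makes subsequence-parallel computation schemes possible.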

International School of Photonics, Cochin University of Science and Technology

Magnetic Resonance Imaging (MRI) is a multi-sequence medical imaging technique in which stacks of images are acquired with different tissue contrasts. Simultaneous observation and quantitative analysis of normal brain tissues and small abnormalities from these many different sequences is a great challenge in clinical applications. Multispectral MRI analysis can simplify the job considerably by combining any number of available co-registered sequences in a single suite. However, the poor performance of the multispectral system with conventional image classification and segmentation methods makes it inappropriate for clinical analysis. Recent work in multispectral brain MRI analysis has attempted to resolve this issue with improved feature extraction approaches, such as transform-based methods, fuzzy approaches, algebraic techniques and so forth. Transform-based feature extraction methods like Independent Component Analysis (ICA) and its extensions have been used effectively in recent studies to improve the performance of multispectral brain MRI analysis. However, these global transforms were found to be inefficient and inconsistent in identifying less frequently occurring features, like small lesions, in large amounts of MR data. The present thesis focuses on improving ICA-based feature extraction techniques to enhance the performance of multispectral brain MRI analysis. Methods using spectral clustering and wavelet transforms are proposed to resolve the inefficiency of ICA in identifying small abnormalities, and the problems due to ICA over-completeness. The effectiveness of the new methods in brain tissue classification and segmentation is confirmed by a detailed quantitative and qualitative analysis with synthetic and clinical data, both normal and abnormal. In comparison to conventional classification techniques, the proposed algorithms provide better performance in the classification of normal brain tissues and of significant small abnormalities.
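A minimal sketch of the preprocessing behind ICA-based feature extraction: treat the co-registered sequences as channels, then centre and whiten them before the ICA rotation. The synthetic non-Gaussian sources and 3x3 mixing are illustrative assumptions standing in for real MR sequences:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix = 5000
# three co-registered "sequences" (hypothetical stand-ins for e.g. T1/T2/FLAIR),
# each flattened to one vector of pixel intensities
sources = rng.standard_normal((3, n_pix)) ** 3     # non-Gaussian sources
mixing = rng.standard_normal((3, 3))
X = mixing @ sources                               # observed channel mixtures

# centring and whitening: the standard preprocessing before the ICA rotation
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / n_pix
vals, vecs = np.linalg.eigh(cov)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T          # inverse square root of cov
Z = W @ Xc                                         # whitened data
print(np.round(Z @ Z.T / n_pix, 6))                # approximately the identity
```

After whitening, ICA only has to find a rotation that maximises non-Gaussianity; the thesis's contribution is in making the recovered components sensitive to rare features such as small lesions.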

The study of variable stars is an important topic of modern astrophysics. Since the invention of powerful telescopes and CCDs with high resolving power, variable star data have been accumulating on the order of petabytes. The huge amount of data needs many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases, the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases, it is due to external processes, like eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is for an expert to look visually at the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of the methods can be automated, none of the methods stated above can fully recover the true periods. Wrong detection of a period can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem in the case of huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters, such as period, amplitude and phase, are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
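The Phase Dispersion Minimisation method cited above (Stellingwerf 1978) can be sketched as folding the light curve on trial periods and scoring the ratio of within-bin variance to total variance; this is a simplified theta statistic, and the synthetic light curve, noise level and trial grid are illustrative assumptions:

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Simplified PDM theta: mean within-bin variance over total variance.

    The true period gives tight phase bins and hence a small theta.
    """
    phase = np.mod(t / period, 1.0)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    within = [mag[bins == b].var() for b in range(n_bins) if np.sum(bins == b) > 1]
    return np.mean(within) / mag.var()

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 100.0, 400))            # unevenly spaced epochs
true_p = 2.5
mag = np.sin(2 * np.pi * t / true_p) + 0.1 * rng.standard_normal(400)

trials = np.linspace(2.0, 3.0, 501)                  # 0.002-day trial grid
theta = np.array([pdm_theta(t, mag, p) for p in trials])
best = trials[np.argmin(theta)]
print(best)                                          # close to the true 2.5 d
```

Being non-parametric, PDM makes no assumption about the light-curve shape, which is why it copes with the uneven sampling typical of ground-based surveys.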

In this study, we compare two different cyclone-tracking algorithms for detecting North Atlantic polar lows, which are very intense mesoscale cyclones. Both approaches include spatial filtering, detection, tracking and constraints specific to polar lows. The first method uses digitally bandpass-filtered mean sea level pressure (MSLP) fields in the spatial range of 200-600 km and is especially designed for polar lows. The second method also uses a bandpass filter but is based on the discrete cosine transform (DCT) and can be applied to MSLP and vorticity fields. The latter was originally designed for cyclones in general and has been adapted to polar lows for this study. Both algorithms are applied to the same regional climate model output fields from October 1993 to September 1995, produced by dynamical downscaling of the NCEP/NCAR reanalysis data. Comparisons between the two methods show that different filters lead to different numbers and locations of tracks. The DCT is more precise in scale separation than the digital filter, and the results of this study suggest that it is better suited to the bandpass filtering of MSLP fields. The detection and tracking steps also influence the number of tracks, although less critically. After a selection process that applies criteria to identify tracks of potential polar lows, differences between the two methods are still visible, though the major systems are identified by both.
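A minimal sketch of DCT-based bandpass filtering of a 2D field: transform, mask coefficients outside a normalised wavenumber band, transform back. The toy field and band edges are illustrative assumptions, not the study's 200-600 km configuration:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_bandpass(field, k_lo, k_hi):
    """Bandpass a 2D field by masking DCT coefficients outside a wavenumber band."""
    c = dctn(field, norm='ortho')
    ny, nx = field.shape
    ky, kx = np.meshgrid(np.arange(ny), np.arange(nx), indexing='ij')
    k = np.sqrt((ky / ny) ** 2 + (kx / nx) ** 2)   # normalised wavenumber
    mask = (k >= k_lo) & (k <= k_hi)
    return idctn(c * mask, norm='ortho')

rng = np.random.default_rng(6)
mslp = rng.standard_normal((64, 64))               # toy MSLP anomaly field
filtered = dct_bandpass(mslp, 0.1, 0.3)
print(filtered.shape)                              # (64, 64)
```

The sharp cut in wavenumber space is what gives the DCT approach its precise scale separation; a digital filter applied in grid space can only approximate such a cut.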