69 results for signals
Abstract:
This paper describes a method for automated segmentation of speech that assumes the signal is continuously time-varying, rather than adopting the traditional short-time stationary model. This representation is shown to give comparable, if not marginally better, results than other techniques for automated segmentation. A formulation of the 'Bach' (musical semitone) frequency scale filter bank is proposed. A comparative study has been made of the performance of Mel, Bark and Bach scale filter banks under this model. Preliminary results show up to 80% matches within 20 ms of the manually segmented data, without any information about the content of the text and without any language dependence. 'Bach' filters are seen to marginally outperform the other filters.
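As an illustration of the semitone ('Bach') scale idea, the sketch below builds a triangular filter bank with centre frequencies spaced one musical semitone apart (f_k = f_min * 2^(k/12)), analogous to a mel filter bank. The frequency range, FFT size and triangular shape are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def bach_center_frequencies(f_min=110.0, f_max=8000.0):
    """Centre frequencies spaced one semitone apart: f_k = f_min * 2**(k/12).
    f_min and f_max are illustrative choices, not values from the paper."""
    n = int(np.floor(12 * np.log2(f_max / f_min)))
    return f_min * 2.0 ** (np.arange(n + 1) / 12.0)

def bach_filterbank(n_fft, fs, f_min=110.0, f_max=8000.0):
    """Triangular filters centred on semitone-spaced frequencies (mel-filter-bank analogue)."""
    centers = bach_center_frequencies(f_min, f_max)
    freqs = np.linspace(0, fs / 2, n_fft // 2 + 1)
    bank = np.zeros((len(centers) - 2, len(freqs)))
    for i in range(1, len(centers) - 1):
        lo, c, hi = centers[i - 1], centers[i], centers[i + 1]
        rising = (freqs - lo) / (c - lo)      # rising edge of the triangle
        falling = (hi - freqs) / (hi - c)     # falling edge of the triangle
        bank[i - 1] = np.clip(np.minimum(rising, falling), 0.0, None)
    return bank
```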
Abstract:
Homomorphic analysis and pole-zero modeling of electrocardiogram (ECG) signals are presented in this paper. Four typical ECG signals are considered and deconvolved into their minimum- and maximum-phase components through cepstral filtering, with a view to studying the possibility of more efficient feature selection from the component signals for diagnostic purposes. The complex cepstra of the signals are linearly filtered to extract the basic wavelet and the excitation function. The ECG signals are, in general, mixed phase, and hence exponential weighting is applied to aid deconvolution of the signals. The basic wavelet for a normal ECG approximates the action potential of the heart muscle fibre, and the excitation function corresponds to the excitation pattern of the heart muscles during a cardiac cycle. The ECG signals and their components are pole-zero modeled, and the pole-zero pattern of the models can give a clue to classifying normal and abnormal signals. Moreover, storing only the parameters of the model can result in a data reduction of more than 3:1 for normal signals sampled at a moderate 128 samples/s.
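A minimal sketch of the cepstral deconvolution step described above: exponential weighting, complex cepstrum, low-time/high-time liftering, and inversion of each part. The weighting factor, lifter cutoff and the simplified cepstrum (no linear-phase correction) are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def complex_cepstrum(x):
    """Complex cepstrum via FFT -> complex log -> IFFT (simplified; no linear-phase correction)."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.real(np.fft.ifft(log_X))

def cepstral_deconvolve(ecg, alpha=0.98, cutoff=30):
    """Split an ECG segment into a 'basic wavelet' (low-time cepstrum) and an
    'excitation' component (high-time cepstrum).

    alpha  -- exponential weight ecg[n] * alpha**n applied to aid deconvolution
              of mixed-phase signals (value is illustrative).
    cutoff -- lifter cutoff in cepstral samples (illustrative).
    """
    n = np.arange(len(ecg))
    xw = ecg * alpha ** n                       # exponential weighting
    c = complex_cepstrum(xw)
    low = np.zeros_like(c)
    low[:cutoff] = c[:cutoff]
    low[-cutoff:] = c[-cutoff:]                 # keep low-quefrency part (both ends)
    high = c - low

    def inverse(cep):
        # invert cepstrum of one component and undo the exponential weighting
        X = np.exp(np.fft.fft(cep))
        return np.real(np.fft.ifft(X)) / alpha ** n

    return inverse(low), inverse(high)          # basic wavelet, excitation function
```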
Abstract:
We consider the possibility of fingerprinting the presence of heavy additional Z' bosons, which arise naturally in extensions of the standard model such as E-6 models and left-right symmetric models, through their mixing with the standard model Z boson. By considering a class of observables including total cross sections, energy distributions and angular distributions of decay leptons, we find significant deviations from the standard model predictions for these quantities with right-handed electrons and left-handed positrons at √s = 800 GeV. The deviations are less pronounced at smaller centre-of-mass energies, as the models are already tightly constrained there. Our work suggests that the ILC should have a strong beam-polarization physics programme, particularly with these configurations. On the other hand, a forward-backward asymmetry and the lepton fraction in the backward direction are more sensitive to new physics with realistic polarization, due to an interesting interplay with the neutrino t-channel diagram. This process complements the study of fermion pair production processes that have been considered for discrimination between these models.
Abstract:
The EEG time series has been subjected to various formalisms of analysis to extract meaningful information about the underlying neural events. In this paper, the linear prediction (LP) method is used for the analysis and presentation of spectral array data for better visualisation of background EEG activity. It is also used for signal generation, efficient data storage and transmission of EEG. The LP method is compared with the standard Fourier method of compressed spectral array (CSA) for multichannel EEG data. The autocorrelation method of autoregressive (AR) modelling is used to obtain the LP coefficients with a model order of 15. While the Fourier method reduces the data only by half, the LP method requires storage of only the signal variance and the LP coefficients. The signal generated using white Gaussian noise as the input to the LP filter has a high correlation coefficient of 0.97 with the original signal, making LP a useful tool for storage and transmission of EEG. The biological significance of the Fourier and LP methods with respect to the microstructure of neuronal events in the generation of EEG is discussed.
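A small sketch of the LP analysis/synthesis idea, assuming the autocorrelation (Yule-Walker) method with model order 15 and regeneration of the signal by driving the all-pole LP filter with white Gaussian noise; the function names and details are illustrative.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lp_coefficients(x, order=15):
    """AR (LP) coefficients by the autocorrelation (Yule-Walker) method."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)   # biased autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])               # solve R a = r
    gain = r[0] - np.dot(a, r[1:order + 1])                     # prediction-error variance
    return a, gain

def synthesize_eeg(a, gain, n_samples, seed=0):
    """Regenerate an EEG-like segment by driving the all-pole LP filter with white noise."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n_samples) * np.sqrt(gain)
    return lfilter([1.0], np.concatenate(([1.0], -a)), e)
```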
Abstract:
We propose the F-norm of the cross-correlation part of the array covariance matrix as a measure of the correlation between impinging signals, and study the performance of different decorrelation methods in the broadband case using this measure. We first show that the dimensionality of the composite signal subspace, defined as the number of significant eigenvectors of the source sample covariance matrix, collapses in the presence of multipath, and that spatial smoothing recovers this dimensionality. Using an upper bound on the proposed measure, we then study the decorrelation of broadband signals under spatial smoothing and the effect of the spacing and directions of the sources on the rate of decorrelation with progressive smoothing. Next, we introduce a weighted smoothing method based on Toeplitz-block-Toeplitz (TBT) structuring of the data covariance matrix, which decorrelates the signals much faster than spatial smoothing. Computer simulations are included to demonstrate the performance of the two methods.
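The sketch below shows forward spatial smoothing of an array covariance matrix and a simple Frobenius-norm measure of the cross-correlation (off-diagonal) terms; the exact definition of the measure in the paper and the TBT weighted smoothing method are not reproduced, so this is only an illustrative stand-in.

```python
import numpy as np

def spatial_smoothing(R, subarray_size):
    """Forward spatial smoothing: average the covariances of overlapping subarrays
    of a uniform linear array to decorrelate coherent multipath signals."""
    M = R.shape[0]
    L = M - subarray_size + 1                      # number of overlapping subarrays
    Rs = np.zeros((subarray_size, subarray_size), dtype=complex)
    for l in range(L):
        Rs += R[l:l + subarray_size, l:l + subarray_size]
    return Rs / L

def cross_correlation_measure(Rs_source):
    """Frobenius norm of the off-diagonal (cross-correlation) part of a source
    covariance matrix -- a simple stand-in for the measure discussed above."""
    off_diag = Rs_source - np.diag(np.diag(Rs_source))
    return np.linalg.norm(off_diag, 'fro')
```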
Abstract:
In this letter, we propose a method for blind separation of d co-channel BPSK signals arriving at an antenna array. Our method involves two steps. In the first step, the received data vectors at the output of the array are grouped into 2^d clusters. In the second step, we assign the 2^d d-tuples with ±1 elements to these clusters in a consistent fashion. From knowledge of the cluster to which a data vector belongs, we estimate the bits transmitted at that instant. Computer simulations are used to study the performance of our method.
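A sketch of the first (clustering) step, assuming k-means is an acceptable stand-in for grouping the array snapshots into 2^d clusters; the consistent assignment of the ±1 d-tuples in the second step is not reproduced here.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_bpsk_snapshots(X, d, seed=0):
    """Step 1 of the method: group array snapshots into 2**d clusters.

    X : (n_snapshots, n_antennas) complex array of received data vectors.
    Returns the cluster centroids and the cluster index of each snapshot; step 2
    (consistently assigning the 2**d +/-1 d-tuples to the clusters) is not shown.
    """
    features = np.hstack([X.real, X.imag])          # treat real/imag parts jointly
    centroids, labels = kmeans2(features, 2 ** d, minit='++', seed=seed)
    return centroids, labels
```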
Abstract:
In this paper, we develop a method to compute the fractal dimension (FD) of discrete-time signals, in the time domain, by modifying the box-counting method. The size of the box depends on the sampling frequency of the signal. The number of boxes required to completely cover the signal is obtained at multiple time resolutions. The time resolutions are made coarser by decimating the signal. The log-log plot of the total number of boxes required to cover the curve versus the size of the box used appears to be a straight line, whose slope is taken as an estimate of the FD of the signal. Results are provided to demonstrate the performance of the proposed method using parametric fractal signals. The estimation accuracy of the method is compared with that of the Katz, Sevcik, and Higuchi methods. In addition, some properties of the FD are discussed.
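A minimal sketch of box counting on a sampled curve at dyadically decimated time resolutions, with the FD taken as the slope of log N(r) versus log(1/r); the normalisation and box-size choices are illustrative simplifications of the method described above.

```python
import numpy as np

def box_counting_fd(x, max_levels=6):
    """Box-counting FD of a sampled curve: normalise time and amplitude to [0, 1],
    use square boxes of side r = (decimation step)/N, and count the boxes needed
    in each time column to cover the curve.  FD is the slope of log N(r) vs log(1/r)."""
    x = np.asarray(x, dtype=float)
    x = (x - x.min()) / (np.ptp(x) + 1e-12)        # amplitude normalised to [0, 1]
    N = len(x)
    log_inv_r, log_count = [], []
    for level in range(max_levels):
        step = 2 ** level
        if step * 4 > N:                           # stop when too few columns remain
            break
        r = step / N                               # box side in normalised units
        count = 0
        for start in range(0, N - step, step):
            col = x[start:start + step + 1]
            count += max(1, int(np.ceil((col.max() - col.min()) / r)))
        log_inv_r.append(np.log(1.0 / r))
        log_count.append(np.log(count))
    slope, _ = np.polyfit(log_inv_r, log_count, 1)  # slope of the log-log line
    return slope
```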
Abstract:
Development of preimplantation embryos and blastocyst implantation are critical early events in the establishment of pregnancy. In primates, embryonic signals secreted during the peri-implantation period are believed to play a major role in the regulation of embryonic differentiation and implantation. However, only limited progress has been made in the molecular and functional characterization of embryonic signals, partly due to the severe paucity of primate embryos and the lack of optimal culture conditions for viable embryo development. Two embryonic (endocrine) secretions, i.e. chorionic gonadotrophin (CG) and gonadotrophin-releasing hormone (GnRH), are being studied. This article reviews the current status of knowledge on the recovery and culture of embryos, their secretion of CG, GnRH and other potential endocrine signals, and their regulation and physiological role(s) during the peri-implantation period in primates, including humans.
Abstract:
One of the main disturbances in EEG signals is EMG artefacts generated by muscle movements. In this paper, the use of a linear-phase FIR digital low-pass filter with finite-wordlength-precision coefficients, designed using the compensation procedure, is proposed to minimise EMG artefacts in contaminated EEG signals. To make the filtering more effective, different structures are used, i.e. cascading, twicing and sharpening (apart from simple low-pass filtering) of the designed FIR filter. Modifications are proposed to the twicing and sharpening structures to regain the linear-phase characteristics that are lost in conventional twicing and sharpening operations. The efficacy of all these transformed filters in minimising EMG artefacts is studied, using SNR improvement as a performance measure for simulated signals. Time plots of the signals are also compared. Studies show that the modified sharpening structure is superior in performance to all the other proposed methods. These algorithms have also been applied to a real, recorded EMG-contaminated EEG signal. Comparison of the time plots, and also the output SNR, shows that the proposed modified sharpened structure works better in minimising EMG artefacts than the other methods considered.
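A sketch of the conventional twicing (H(2 - H)) and sharpening (H^2(3 - 2H)) structures applied with a linear-phase FIR low-pass prototype; the filter length and cutoff are illustrative, and the paper's delay-compensated modifications that restore linear phase are not reproduced.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def lowpass(x, taps):
    """One pass of the prototype FIR low-pass filter."""
    return lfilter(taps, [1.0], x)

def twicing(x, taps):
    """Twicing: overall response H*(2 - H) = 2H - H^2."""
    y1 = lowpass(x, taps)
    return 2.0 * y1 - lowpass(y1, taps)

def sharpening(x, taps):
    """Sharpening: overall response H^2*(3 - 2H) = 3H^2 - 2H^3."""
    y2 = lowpass(lowpass(x, taps), taps)
    return 3.0 * y2 - 2.0 * lowpass(y2, taps)

# Example prototype: a 41-tap linear-phase FIR low-pass filter
# (length and cutoff are illustrative, not the paper's design)
taps = firwin(41, cutoff=0.2)   # cutoff normalised to the Nyquist frequency
```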
Abstract:
We have made a detailed study of the signals expected at CERN LEP 2 from charged scalar bosons whose dominant decay channels are into four fermions. The event rates as well as kinematics of the final states are discussed when such scalars are either pair produced or are generated through a tree-level interaction involving a charged scalar, the W, and the Z. The backgrounds in both cases are discussed. We also suggest the possibility of reconstructing the mass of such a scalar at LEP 2.
Abstract:
The removal of noise and outliers from measurement signals is a major problem in jet engine health monitoring. Typical measurement signals found in most jet engines include low rotor speed, high rotor speed, fuel flow and exhaust gas temperature. Deviations in these measurements from a baseline 'good' engine are often called measurement deltas, and are the health signals used for fault detection, isolation, trending and data mining. Linear filters such as the FIR moving-average filter and the IIR exponential-average filter are used in industry to remove noise and outliers from jet engine measurement deltas. However, the use of linear filters can lead to the loss of critical features in the signal that may contain information about maintenance and repair events, which could be used by fault isolation algorithms to determine engine condition or by data mining algorithms to learn valuable patterns in the data. Non-linear filters such as the median and weighted median hybrid filters offer the opportunity to remove noise and gross outliers from signals while preserving features. In this study, a comparison of traditional linear filters popular in the jet engine industry is made with the median filter and the subfilter weighted FIR median hybrid (SWFMH) filter. Results using simulated data with implanted faults show that the SWFMH filter achieves a noise reduction of over 60 per cent, compared to only 20 per cent for FIR filters and 30 per cent for IIR filters. Preprocessing jet engine health signals using the SWFMH filter would greatly improve the accuracy of diagnostic systems. (C) 2002 Published by Elsevier Science Ltd.
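For illustration, the sketch below implements a basic FIR median hybrid filter (the median of a backward moving average, the current sample and a forward moving average); the subfilter-weighted SWFMH variant studied in the paper adds weighted subfilters and is not reproduced here.

```python
import numpy as np

def fir_median_hybrid(x, window=5):
    """Basic FIR median hybrid (FMH) filter: at each point, take the median of the
    backward moving average, the current sample, and the forward moving average.
    Removes gross outliers while preserving step-like features."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for n in range(window, len(x) - window):
        backward = x[n - window:n].mean()        # backward FIR subfilter
        forward = x[n + 1:n + window + 1].mean() # forward FIR subfilter
        y[n] = np.median([backward, x[n], forward])
    return y
```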
Abstract:
We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem is formulated in the time domain, and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying, PMMSE-optimal window length whose performance is superior to that obtained with a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence-interval width. Simulation results on electrocardiogram (ECG) signals show that, at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to that of basic wavelet techniques. At 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
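A compact sketch of local-polynomial smoothing with ICI window selection: for each sample, the window is enlarged as long as the confidence intervals around the successive local estimates still intersect. The noise standard deviation is assumed known, and the window grid, polynomial order and threshold gamma are illustrative choices rather than the paper's.

```python
import numpy as np

def ici_local_poly(y, sigma, windows=(5, 9, 17, 33, 65), order=2, gamma=2.0):
    """Pointwise local-polynomial smoothing with ICI-based window selection."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for n in range(len(y)):
        lo_bound, hi_bound = -np.inf, np.inf
        best = y[n]
        for w in windows:
            h = w // 2
            idx = np.arange(max(0, n - h), min(len(y), n + h + 1))
            t = idx - n
            A = np.vander(t, order + 1, increasing=True)        # columns [1, t, t^2, ...]
            coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
            est = coef[0]                                        # fitted value at t = 0
            G = np.linalg.pinv(A.T @ A)
            std = sigma * np.sqrt(G[0, 0])                       # std of the estimate
            lo, hi = est - gamma * std, est + gamma * std
            lo_bound, hi_bound = max(lo_bound, lo), min(hi_bound, hi)
            if lo_bound > hi_bound:                              # intervals no longer intersect
                break
            best = est                                           # keep the largest admissible window
        out[n] = best
    return out
```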
Abstract:
While wireless LAN (WLAN) is very popular nowadays, its performance deteriorates in the presence of other signals, such as Bluetooth (BT) signals, that operate in the same band as WLAN. Present techniques for mitigating BT interference in WLAN cancel the interference in the WLAN subcarrier to which BT has hopped, but do not cancel the interference in the adjacent subcarriers. In this paper, the BT interference signal in all the OFDM subcarriers is estimated. That is, the leakage of BT into the other subcarriers, in addition to the subcarrier in which it has hopped, is also measured. The BT signals are estimated using the training signals of the OFDM system. Simulation results in AWGN show that the proposed algorithm agrees closely with theoretical results.
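A loose sketch of the residual-based idea of estimating the interference on every OFDM subcarrier from a training symbol, assuming a per-subcarrier channel estimate is available; the actual estimation algorithm in the paper may differ in detail.

```python
import numpy as np

def estimate_interference(received_td, training_freq, channel_freq):
    """Estimate the interference seen on every OFDM subcarrier of a training symbol
    as the residual after removing the known training contribution.

    received_td   -- time-domain samples of one OFDM training symbol (CP removed)
    training_freq -- known training symbol on each subcarrier
    channel_freq  -- channel frequency response estimate per subcarrier
    """
    R = np.fft.fft(received_td)
    return R - channel_freq * training_freq    # leakage of the interferer on all subcarriers
```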
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 K samples/s with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. Data processing for neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with the external world while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
Abstract:
The resolution of the digital signal path has a crucial impact on the design, performance and power dissipation of the radio receiver data path downstream from the ADC. The ADC quantization noise has traditionally been lumped with the front-end receiver noise in calculating the SNR as well as the BER for the receiver. Using IEEE 802.15.4 as an example, we show that this approach leads to an over-design of the ADC and the digital signal path, resulting in higher power consumption. More accurate specifications for the front-end design can be obtained by making the required SNR a function of the signal resolution. We show that lower-resolution signals provide adequate performance and that quantization noise alone does not produce any bit errors. We find that a tight bandpass filter preceding the ADC can relax the resolution requirement, and that a 1-bit ADC degrades SNR by only 1.35 dB compared to an 8-bit ADC. Signal resolution has a larger impact on synchronization, and a 1-bit ADC costs about 5 dB in SNR to maintain the same level of performance as an 8-bit ADC.
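For reference, the sketch below computes the conventional SNR budget the abstract refers to, combining front-end noise with ideal ADC quantization noise (6.02*N + 1.76 dB for a full-scale sinusoid); it illustrates the traditional budgeting approach, not the refined analysis of the paper.

```python
import numpy as np

def quantization_snr_db(bits):
    """Ideal quantization SNR for a full-scale sinusoid: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def combined_snr_db(front_end_snr_db, bits):
    """Combine front-end noise and ADC quantization noise as independent contributions."""
    n_fe = 10 ** (-front_end_snr_db / 10.0)
    n_q = 10 ** (-quantization_snr_db(bits) / 10.0)
    return -10.0 * np.log10(n_fe + n_q)

# Example: compare the SNR penalty of a 1-bit ADC against an 8-bit ADC
# for an assumed 10 dB front-end SNR (the 10 dB figure is illustrative).
penalty_db = combined_snr_db(10.0, 8) - combined_snr_db(10.0, 1)
```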