872 results for Signal-subspace compression


Relevance:

40.00%

Publisher:

Abstract:

We propose techniques of optical frequency conversion, pulse compression and signal copying based on a combination of cross-phase modulation using triangular pump pulses and subsequent propagation in a dispersive medium.

Relevance:

30.00%

Publisher:

Abstract:

If the Internet could be used as a method of transmitting ultrasound images taken in the field quickly and effectively, it would bring tertiary consultation to even extremely remote centres. The aim of the study was to evaluate the maximum degree of compression of fetal ultrasound video-recordings that would not compromise signal quality. A digital fetal ultrasound video-recording of 90 s was produced, resulting in a file size of 512 MByte. The file was compressed to 2, 5 and 10 MByte. The recordings were viewed by a panel of four experienced observers who were blinded to the compression ratio used. Using a simple seven-point scoring system, the observers rated the quality of the clip on 17 items. The maximum compression ratio that was considered clinically acceptable was found to be 1:50-1:100. This produced final file sizes of 5-10 MByte, corresponding to a screen size of 320 × 240 pixels, running at 15 frames/s. This study expands the possibilities for providing tertiary perinatal services to the wider community.
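The acceptable-compression band quoted above can be checked with back-of-the-envelope arithmetic; the short sketch below is not from the study, it simply reproduces the ratios implied by the stated file sizes.

```python
# Back-of-the-envelope check of the compression ratios implied by the
# file sizes quoted in the abstract (512-MByte source recording).
SOURCE_MB = 512

def compression_ratio(compressed_mb):
    """Ratio of source size to compressed size (the '1:N' figure)."""
    return SOURCE_MB / compressed_mb

for target_mb in (2, 5, 10):
    print(f"{target_mb} MByte -> 1:{compression_ratio(target_mb):.0f}")
```

The 5 and 10 MByte targets come out at roughly 1:102 and 1:51, consistent with the 1:50-1:100 range reported as clinically acceptable.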

Relevance:

30.00%

Publisher:

Abstract:

Compression amplification significantly alters the acoustic speech signal in comparison to linear amplification. The central hypothesis of the present study was that the compression settings of a two-channel aid that best preserved the acoustic properties of speech compared to linear amplification would yield the best perceptual results, and that the compression settings that most altered the acoustic properties of speech compared to linear would yield significantly poorer speech perception. On the basis of initial acoustic analysis of the test stimuli recorded through a hearing aid, two different compression amplification settings were chosen for the perceptual study. Participants were 74 adults with mild to moderate sensorineural hearing impairment. Overall, the speech perception results supported the hypothesis. A further aim of the study was to determine if variation in participants' speech perception with compression amplification (compared to linear amplification) could be explained by the individual characteristics of age, degree of loss, dynamic range, temporal resolution, and frequency selectivity; however, no significant relationships were found.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a spatial filtering technique for the reception of pilot-aided multirate multicode direct-sequence code division multiple access (DS/CDMA) systems such as wideband CDMA (WCDMA). These systems introduce a code-multiplexed pilot sequence that can be used for the estimation of the filter weights, but the presence of the traffic signal (transmitted at the same time as the pilot sequence) corrupts that estimation and degrades the performance of the filter significantly. This is caused by the fact that, although the traffic and pilot signals are usually designed to be orthogonal, the frequency selectivity of the channel degrades this orthogonality at the receiving end. Here, we propose a semi-blind technique that eliminates the self-noise caused by the code-multiplexing of the pilot. We derive analytically the asymptotic performance of both the training-only and the semi-blind techniques and compare them with the actual simulated performance. It is shown, both analytically and via simulation, that high gains can be achieved with respect to training-only-based techniques.
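The orthogonality-loss mechanism the abstract describes is easy to illustrate with a toy NumPy sketch; the length-4 Walsh codes and the 2-tap channel below are hypothetical stand-ins, not the paper's WCDMA parameters.

```python
import numpy as np

# Two length-4 Walsh codes: orthogonal by construction at the transmitter.
pilot   = np.array([1.0,  1.0,  1.0,  1.0])
traffic = np.array([1.0, -1.0,  1.0, -1.0])
print(pilot @ traffic)                     # 0.0 -> orthogonal

# Hypothetical 2-tap frequency-selective channel.
h = np.array([1.0, 0.5])
rx_pilot   = np.convolve(pilot, h)[:4]     # keep one symbol span
rx_traffic = np.convolve(traffic, h)[:4]

# After the channel the codes are no longer orthogonal, so traffic
# energy leaks into any pilot-based weight estimate ("self-noise").
print(rx_pilot @ rx_traffic)               # 0.25 -> orthogonality lost
```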

Relevance:

30.00%

Publisher:

Abstract:

In numerical linear algebra, students encounter early the iterative power method, which finds eigenvectors of a matrix from an arbitrary starting point through repeated normalization and multiplications by the matrix itself. In practice, more sophisticated methods are used nowadays, threatening to make the power method a historical and pedagogic footnote. However, in the context of communication over a time-division duplex (TDD) multiple-input multiple-output (MIMO) channel, the power method takes a special position. It can be viewed as an intrinsic part of the uplink and downlink communication switching, enabling estimation of the eigenmodes of the channel without extra overhead. Generalizing the method to vector subspaces, communication in the subspaces with the best receive and transmit signal-to-noise ratio (SNR) is made possible. In exploring this intrinsic subspace convergence (ISC), we show that several published and new schemes can be cast into a common framework where all members benefit from the ISC.
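As a refresher on the method this abstract builds on, here is a minimal power-method sketch in generic NumPy, not the paper's TDD-MIMO formulation:

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Find the dominant eigenvector of A by repeated multiplication
    by A and normalization, starting from a random vector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

# Small symmetric example with eigenvalues 3 +/- sqrt(2).
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
v = power_method(A)
lam = v @ A @ v          # Rayleigh quotient of the converged vector
print(lam)               # ~4.4142 = 3 + sqrt(2), the dominant eigenvalue
```

In the TDD setting the abstract describes, each uplink/downlink round trip plays the role of one such multiplication, which is why no extra overhead is needed.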

Relevance:

30.00%

Publisher:

Abstract:

The problem of selecting an appropriate wavelet filter is always present in signal compression based on the wavelet transform. In this report, we propose a method to select a wavelet filter from a predefined set of filters for the compression of spectra from a multispectral image. The wavelet filter selection is based on Learning Vector Quantization (LVQ). In the training phase for the test images, the best wavelet filter for each spectrum is found by a careful compression-decompression evaluation. Certain spectral features are used to characterize the pixel spectra. The LVQ is used to form the best wavelet filter class for different types of spectra from multispectral images. When a new image is to be compressed, a set of spectra from that image is selected, the spectra are classified by the trained LVQ, and the filter associated with the largest class is selected for the compression of every spectrum from the multispectral image. The results show that in almost every case our method finds the most suitable wavelet filter from the predefined set for the compression.
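The classification step above relies on LVQ; a minimal LVQ1 sketch with hypothetical two-dimensional "spectral features" and two filter classes (only the training rule, not the report's actual feature set or filter bank) might look like:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """Basic LVQ1: pull the nearest prototype toward a sample of its
    own class, push it away from a sample of another class."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = int(np.argmin(np.linalg.norm(P - x, axis=1)))
            step = lr * (x - P[i])
            P[i] += step if proto_labels[i] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    return np.array([proto_labels[int(np.argmin(np.linalg.norm(P - x, axis=1)))]
                     for x in X])

# Hypothetical 2-D "spectral features": two well-separated clusters,
# each standing in for spectra best compressed by a different filter.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)),
               rng.normal(3.0, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)           # 0 / 1 = wavelet filter class
P = lvq1_train(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]), proto_labels=[0, 1])
acc = (lvq1_predict(X, P, [0, 1]) == y).mean()
print(acc)                                  # training accuracy
```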

Relevance:

30.00%

Publisher:

Abstract:

Multispectral images contain information from several spectral wavelengths. They are widely used in remote sensing and are becoming more common in the field of computer vision and in industrial applications. Typically, a single multispectral image in remote sensing may occupy hundreds of megabytes of disk space, and several such images may be received from a single measurement. This study considers the compression of multispectral images. The lossy compression is based on the wavelet transform, and we compare the suitability of different wavelet filters for the compression. A method for selecting a wavelet filter for the compression and reconstruction of multispectral images is developed. The performance of the multidimensional wavelet-transform-based compression is compared to other compression methods such as PCA, ICA, SPIHT, and DCT/JPEG. The quality of the compression and reconstruction is measured by quantitative measures such as the signal-to-noise ratio. In addition, we have developed a qualitative measure that combines information from the spatial and spectral dimensions of a multispectral image and also accounts for the visual quality of the bands of the multispectral image.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an improved technique for evolving wavelet coefficients refined for the compression and reconstruction of fingerprint images is presented. The FBI fingerprint compression standard [1, 2] uses the CDF 9/7 wavelet filter coefficients. The lifting scheme is an efficient way to represent classical wavelets with fewer filter coefficients [3, 4]. Here, a genetic algorithm (GA) is used to evolve better lifting filter coefficients for the CDF 9/7 wavelet to compress and reconstruct fingerprint images with better quality. Since the lifting filter coefficients are few in number compared to the corresponding classical wavelet filter coefficients, they are evolved at a faster rate using the GA. A better reconstructed image quality in terms of peak signal-to-noise ratio (PSNR) is achieved with the best lifting filter coefficients evolved for a compression ratio of 16:1. The evolved coefficients also perform well at other compression ratios.
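For context, the CDF 9/7 lifting structure whose coefficients a GA would refine can be sketched as below. The coefficient values are the usual published ones (the GA's starting point), and the clamped boundary handling is a simplification of the standard symmetric extension.

```python
import numpy as np

# Standard CDF 9/7 lifting coefficients -- the values a GA would
# take as its starting point when evolving better ones.
ALPHA, BETA  = -1.586134342, -0.052980118
GAMMA, DELTA =  0.882911076,  0.443506852
ZETA         =  1.149604398

def predict(d, s, c):
    # d[i] += c * (s[i] + s[i+1]); neighbor index clamped at the edge
    for i in range(len(d)):
        d[i] += c * (s[i] + s[min(i + 1, len(s) - 1)])

def update(s, d, c):
    # s[i] += c * (d[i-1] + d[i]); neighbor index clamped at the edge
    for i in range(len(s)):
        s[i] += c * (d[max(i - 1, 0)] + d[i])

def cdf97_forward(x):
    """One level of the CDF 9/7 transform via lifting (even-length x)."""
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    predict(d, s, ALPHA); update(s, d, BETA)
    predict(d, s, GAMMA); update(s, d, DELTA)
    return s * ZETA, d / ZETA

def cdf97_inverse(s, d):
    """Undo the lifting steps in reverse order -- exact by construction."""
    s, d = s / ZETA, d * ZETA
    update(s, d, -DELTA); predict(d, s, -GAMMA)
    update(s, d, -BETA);  predict(d, s, -ALPHA)
    x = np.empty(len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x

x = np.arange(16, dtype=float) ** 2 / 10.0   # smooth test signal
s, d = cdf97_forward(x)
xr = cdf97_inverse(s, d)
print(float(np.max(np.abs(xr - x))))         # reconstruction error ~ round-off
```

Because every lifting step is exactly invertible, perfect reconstruction holds for any coefficient values, which is what lets a GA search over them freely.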

Relevance:

30.00%

Publisher:

Abstract:

In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the evolved coefficients performed much better than the coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, in this work, wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was also observed for other compression ratios (CR) and for degraded images. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder. The coefficients also performed well with other fingerprint databases.
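The PSNR figures quoted above follow the standard definition; a minimal sketch for 8-bit images (the toy values here are illustrative, not the article's data):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    err = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(err ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)     # toy "image"
b = a + 5.0                    # uniform error of 5 grey levels -> MSE = 25
print(round(psnr(a, b), 2))    # ~34.15 dB
```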

Relevance:

30.00%

Publisher:

Abstract:

The results from a range of different signal processing schemes used for the further processing of THz transients are contrasted. The performance of the different classifiers after adopting these schemes is also discussed.

Relevance:

30.00%

Publisher:

Abstract:

This work compares classification results for lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures and generate the data sets that are presented to the classifier for both learning and validation purposes. This gradual degradation of the interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: a) standard evaluation by ratioing the sample spectra with the background spectra; b) a subspace identification algorithm; and c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1-1.0 THz using the above algorithms.
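Within-class and between-class dispersion metrics of the kind mentioned above are commonly computed as scatter traces; the sketch below uses synthetic stand-in features, not the actual THz data.

```python
import numpy as np

def dispersion_metrics(X, y):
    """Trace-form within-class (sw) and between-class (sb) scatter and
    their ratio, a simple separability score for labelled features."""
    mean_all = X.mean(axis=0)
    sw = sb = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        sw += ((Xc - mc) ** 2).sum()                  # spread around class means
        sb += len(Xc) * ((mc - mean_all) ** 2).sum()  # spread of class means
    return sw, sb, sb / sw

# Synthetic stand-ins for the three classes (lactose, mandelic acid,
# dl-mandelic acid); five features per "spectrum".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.2, (30, 5)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)
sw, sb, ratio = dispersion_metrics(X, y)
print(ratio)   # large ratio -> classes easy to distinguish
```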

Relevance:

30.00%

Publisher:

Abstract:

A quasi-optical de-embedding technique for characterizing waveguides is demonstrated using wide-band time-resolved terahertz spectroscopy. A transfer function representation is adopted for the description of the signal at the input and output ports of the waveguides. The time-domain responses were discretized, and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an AutoRegressive with eXogenous input (ARX) model, as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize both the signal distortion and the noise propagating in the ARX and subspace models. The optimal filtering procedure used in the wavelet domain for the recorded time-domain signatures is described in detail. The effect of filtering prior to the identification procedures is elucidated with the aid of pole-zero diagrams. Models derived from measurements of terahertz transients in a precision WR-8 waveguide adjustable short are presented.
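ARX identification of this kind reduces to an ordinary least-squares problem; the synthetic second-order system below is a stand-in for the waveguide data, not the paper's model orders or measurements.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of an ARX model
    y[k] + a1*y[k-1] + ... = b1*u[k-1] + ...  (no noise model)."""
    n = max(na, nb)
    rows, rhs = [], []
    for k in range(n, len(y)):
        rows.append([-y[k - i] for i in range(1, na + 1)]
                    + [u[k - i] for i in range(1, nb + 1)])
        rhs.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta[:na], theta[na:]      # (a, b) coefficient estimates

# Hypothetical stable second-order system standing in for the
# discretized input/output THz signatures of the waveguide.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
a_true, b_true = [-1.5, 0.7], [1.0, 0.5]
y = np.zeros(500)
for k in range(2, 500):
    y[k] = (-a_true[0] * y[k - 1] - a_true[1] * y[k - 2]
            + b_true[0] * u[k - 1] + b_true[1] * u[k - 2])
a_est, b_est = fit_arx(y, u)
print(a_est, b_est)    # recovers a_true and b_true on noiseless data
```

The z-domain poles of the fitted model are the roots of 1 + a1·z⁻¹ + a2·z⁻², which is what the paper's pole-zero diagrams inspect.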

Relevance:

30.00%

Publisher:

Abstract:

The existing dual-rate blind linear detectors, which operate at either the low-rate (LR) or the high-rate (HR) mode, are not strictly blind at the HR mode and lack theoretical analysis. This paper proposes subspace-based LR and HR blind linear detectors, i.e., blind decorrelating detectors (BDD) and blind MMSE detectors (BMMSED), for synchronous DS/CDMA systems. To detect an LR data bit at the HR mode, an effective weighting strategy is proposed. Theoretical analyses of the performance of the proposed detectors are carried out. It is proved that the bit-error rate of the LR-BDD is superior to that of the HR-BDD and that the near-far resistance of the LR blind linear detectors outperforms that of their HR counterparts. The extension to asynchronous systems is also described. Simulation results show that the adaptive dual-rate BMMSED outperforms the corresponding non-blind dual-rate decorrelators proposed by Saquib, Yates and Mandayam (see Wireless Personal Communications, vol. 9, p. 197-216, 1998).
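The subspace construction behind such blind detectors can be sketched for a toy single-rate synchronous system; the codes, amplitudes and noise level below are hypothetical, and the exact covariance replaces the sample estimate a real receiver would form.

```python
import numpy as np

# Two non-orthogonal unit-norm spreading codes (length N = 8).
codes = np.array([[1, 1, 1,  1, 1,  1, 1,  1],
                  [1, 1, 1, -1, 1, -1, 1, -1]], dtype=float)
S = codes.T / np.sqrt(8)
sigma2 = 0.01

# Exact received-chip covariance for unit amplitudes: R = S S^T + sigma^2 I.
R = S @ S.T + sigma2 * np.eye(8)

# Signal subspace: the K = 2 largest eigenpairs of R (eigh is ascending).
w, V = np.linalg.eigh(R)
Us, Ls = V[:, -2:], w[-2:]

# Subspace form of the blind MMSE detector for user 1: it needs only
# that user's own code plus the signal-subspace eigenpairs.
m1 = Us @ np.diag(1.0 / Ls) @ Us.T @ S[:, 0]

# Noiseless sanity check: recover user 1's bits despite user 2's
# interference (a mild near-far-free scenario).
rng = np.random.default_rng(0)
b = rng.choice([-1.0, 1.0], size=(2, 100))
r = S @ b                          # received chips, no noise added here
b1_hat = np.sign(m1 @ r)
acc = (b1_hat == b[0]).mean()
print(acc)                         # detection accuracy for user 1
```

The key point, as in the paper, is that the detector is blind: only the desired user's code and the eigenstructure of the received covariance are used, never the other users' codes.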

Relevance:

30.00%

Publisher:

Abstract:

This work explores the process of bone remodeling. The idea put forward is that the signal that cells sense, and that prompts them to change their architectural conformation, is the potential difference across the free boundary surfaces of collagen fibers, which represent bone at the nanoscale. The work develops a multiscale model. Many studies have attempted to uncover the relationship between a macroscopic external bone load and the cellular scale. The first three simulations applied a longitudinal force, a bending force, and a transverse compression force to a fully longitudinal 0-0 fiber sample. The results first showed a large difference between a fully longitudinal stress and a bending stress. Secondly, a decrease in the potential difference was observed in the transverse-force configuration, suggesting that this signal could be the one driving bone remodeling. To rule out that the results were attributable to the piezoelectric effect of collagen rather than to the mechanical load, different coupled analyses were developed; these show that the piezoelectric effect is far less important than that of the mechanical load. The work then explored how bone remodeling could develop, with analyses involving different geometries and fiber percentages. Moreover, the model initially had to be implemented manually; after an initial improvement, the author implemented a standalone version through the integration of Comsol Multiphysics, Matlab and Excel.