951 results for time–frequency signal processing


Relevance: 100.00%

Abstract:

This volume is based upon the 2nd IEEE European Workshop on Computer-Intensive Methods in Control and Signal Processing, held in Prague, August 1996.

Relevance: 100.00%

Abstract:

Traditional chemometrics techniques are augmented with algorithms tailored specifically for the de-noising and analysis of femtosecond-duration pulse datasets. The new algorithms provide additional insight into sample responses to broadband excitation and into multi-mode propagation phenomena.

Relevance: 100.00%

Abstract:

The purpose of this study was to evaluate ex vivo the accuracy of an electronic apex locator during root canal length determination in primary molars. Methods: One calibrated examiner determined the root canal length in 15 primary molars (34 root canals in total) with different stages of root resorption. Root canal length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the apical resorption bevel, and electronically, using the Digital Signal Processing electronic apex locator. Data were analyzed statistically using the intraclass correlation (ICC) test. Results: Comparing the actual and electronic root canal length measurements in the primary teeth showed a high correlation (ICC=0.95). Conclusions: The Digital Signal Processing apex locator is useful and accurate for apical foramen location during root canal length measurement in primary molars. (Pediatr Dent 2009;31:320-2) Received April 15, 2008 | Last Revision August 21, 2008 | Revision Accepted August 22, 2008
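
As a concrete illustration of the statistic reported above, the sketch below computes an intraclass correlation coefficient from paired length measurements. The abstract does not state which ICC form was used, so the sketch assumes the two-way absolute-agreement form ICC(2,1) of Shrout and Fleiss; the sample lengths are hypothetical.

```python
import numpy as np

def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single measurement. `ratings` is (n_subjects, k_methods), e.g. actual
    vs. electronic canal lengths. The abstract does not state which ICC
    form was used; ICC(2,1) is an assumption."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()

    # Two-way ANOVA decomposition of the total sum of squares
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)   # between canals
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)   # between methods
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical paired lengths in mm: [visual, electronic] for each canal
lengths = [[15.0, 15.2], [13.5, 13.4], [16.1, 16.0], [14.2, 14.5], [12.8, 12.9]]
print(f"ICC(2,1) = {icc_2_1(lengths):.2f}")
```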

Relevance: 100.00%

Abstract:

The purpose of this study was to evaluate the accuracy of two electronic apex locators, Digital Signal Processing (DSP) and ProPex, for root canal length determination in primary teeth. Fifteen primary molars (34 root canals in total) were divided into two groups: Group I, without physiological resorption (n = 16); and Group II, with physiological resorption (n = 18). The length of each canal was measured by introducing a file until its tip was visible and then retracting it 1 mm. For the electronic measurements, the devices were set to 1 mm short of the apical resorption. The data were analysed statistically using the intraclass correlation coefficient (ICC). Results showed that the ICC was high for both electronic apex locators in all situations, with resorption (ICC: DSP = 0.82 and ProPex = 0.89) or without it (ICC: DSP = 0.92 and ProPex = 0.90). Both apex locators were extremely accurate in determining the working length in primary teeth, both with and without physiological resorption.

Relevance: 100.00%

Abstract:

This master's thesis describes the development of signal processing and pattern recognition methods for monitoring Parkinson's disease. It involves developing a signal processing algorithm and passing its output to a pattern recognition algorithm. These algorithms are used to characterize, predict, and draw conclusions in the study of Parkinson's disease, helping us understand how the disease manifests in humans.
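
The thesis abstract names neither the signals nor the algorithms involved, so the sketch below only illustrates the general two-stage pipeline it describes: a signal processing stage that reduces each recording to features, followed by a pattern recognition stage. The tremor band, feature set, and simulated data are all assumptions for illustration.

```python
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(x, fs=100.0):
    """Signal processing stage: reduce one recording to a feature vector.
    The feature set is a hypothetical, tremor-oriented choice."""
    f, pxx = signal.welch(x, fs=fs, nperseg=256)
    band = (f >= 3) & (f <= 7)                  # typical resting-tremor band
    return np.array([
        np.sqrt(np.mean(x ** 2)),               # RMS amplitude
        np.mean(np.abs(np.diff(x))),            # mean absolute first difference
        pxx[band].sum() / pxx.sum(),            # relative 3-7 Hz power
        f[np.argmax(pxx)],                      # dominant frequency
    ])

# Hypothetical dataset: 40 ten-second recordings, half with a simulated 5 Hz tremor
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        x = rng.normal(0, 1, t.size)
        if label:
            x = x + 0.8 * np.sin(2 * np.pi * 5 * t)   # add tremor component
        X.append(extract_features(x))
        y.append(label)

# Pattern recognition stage: cross-validated classification
scores = cross_val_score(RandomForestClassifier(random_state=0), np.array(X), y, cv=5)
print("cross-validated accuracy:", scores.mean())
```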

Relevance: 100.00%

Abstract:

This paper presents a new approach to developing Field Programmable Analog Arrays (FPAAs) that avoids an excessive number of programming elements in the signal path, thus enhancing performance. The paper also introduces a novel FPAA architecture devoid of the conventional switching and connection modules. The proposed FPAA is based on simple current-mode sub-circuits. A simple methodology has been employed for programming the Configurable Analog Cell (CAC). The current-mode approach has enabled the FPAA presented here to operate over almost three decades of frequency. We have demonstrated the feasibility of the FPAA by implementing some signal processing functions.

Relevance: 100.00%

Abstract:

A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
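
A minimal sketch of the design principle discussed above, for binary signals and a width-3 sliding window: from pairs of ideal and observed signals, the filter's output for each observed window pattern is estimated as the conditional majority value of the ideal signal. The window width plays the role of the constraint. All names and data here are illustrative, not the paper's method, and the error is measured on the training pair for brevity.

```python
import numpy as np
from collections import defaultdict

def design_window_filter(observed, ideal, width=3):
    """Plug-in design of a window-based filter for binary signals: for each
    observed window pattern, the error-minimizing output is the majority
    ideal value seen with that pattern in the training data. A minimal
    sketch of the design principle, not the paper's exact procedure."""
    r = width // 2
    counts = defaultdict(lambda: [0, 0])        # pattern -> [#ideal=0, #ideal=1]
    for x, y in zip(observed, ideal):
        for i in range(r, len(x) - r):
            counts[tuple(x[i - r:i + r + 1])][y[i]] += 1
    return {p: int(c[1] > c[0]) for p, c in counts.items()}

def apply_filter(rule, x, width=3):
    r = width // 2
    out = np.array(x)
    for i in range(r, len(x) - r):
        out[i] = rule.get(tuple(x[i - r:i + r + 1]), x[i])  # identity on unseen patterns
    return out

# Hypothetical training pair: a piecewise-constant binary signal observed
# through 10% salt-and-pepper noise
rng = np.random.default_rng(1)
ideal = (np.cumsum(rng.random(2000) < 0.01) % 2).astype(int)   # random runs
observed = np.where(rng.random(2000) < 0.1, 1 - ideal, ideal)
rule = design_window_filter([observed], [ideal])
restored = apply_filter(rule, observed)
print("error before:", np.mean(observed != ideal), "after:", np.mean(restored != ideal))
```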

Relevance: 100.00%

Abstract:

The grinding process is usually the last finishing step for a precision component in the manufacturing industries. It is used to manufacture parts of many different materials, and it therefore demands low roughness, control of dimensional and shape errors, and optimum tool life, all at minimum cost and time. Damage to the parts is very costly, since both the preceding processes and the grinding itself are wasted when a part is damaged at this stage. This work investigates the efficiency of digital signal processing tools applied to acoustic emission signals for detecting thermal damage in the grinding process. To accomplish this goal, an experimental study of 15 runs was carried out on a surface grinding machine operating with an aluminum oxide grinding wheel on ABNT 1045 and VC131 steels. The acoustic emission signals were acquired from a fixed sensor placed on the workpiece holder. A high-sampling-rate acquisition system at 2.5 MHz was used to collect the raw acoustic emission signal instead of the root mean square value usually employed. In each test, the AE data were analyzed off-line, and the results were compared with an inspection of each workpiece for burn and other metallurgical anomalies. A number of statistical signal processing tools were evaluated.
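
The abstract does not list which statistical tools were evaluated, so the sketch below computes a few candidates commonly screened for grinding-burn detection on raw acoustic emission records; the high-frequency band edge and the synthetic record are assumptions.

```python
import numpy as np
from scipy import stats, signal

FS = 2.5e6  # raw AE sampling rate used in the paper (2.5 MHz)

def ae_features(x, fs=FS):
    """Candidate statistics for burn detection on a raw AE record; the
    abstract does not list the exact tools evaluated, so these are
    common, illustrative choices."""
    rms = np.sqrt(np.mean(x ** 2))
    f, pxx = signal.welch(x, fs=fs, nperseg=4096)
    high = f > 300e3                              # hypothetical high-frequency band
    return {
        "rms": rms,
        "crest_factor": np.max(np.abs(x)) / rms,  # peakiness of the record
        "kurtosis": stats.kurtosis(x),            # impulsiveness indicator
        "skewness": stats.skew(x),
        "hf_power_ratio": pxx[high].sum() / pxx.sum(),
    }

# Synthetic record: background AE with one burst-like event
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 1 << 16)
x[30000:30200] += 8.0 * rng.normal(0.0, 1.0, 200)  # simulated burst
print(ae_features(x))
```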

Relevance: 100.00%

Abstract:

In this work a new method is proposed for noise reduction of speech signals in the wavelet domain. The method makes use of a transfer function obtained as a polynomial combination of three processing stages, termed operators. The proposed method aims to overcome the deficiencies of thresholding methods and to process effectively speech corrupted by real noise. Using the method, two speech signals contaminated by white and colored noise are processed. To assess the quality of the processed signals, two evaluation measures are used: signal-to-noise ratio (SNR) and perceptual evaluation of speech quality (PESQ).
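
The operators and the polynomial transfer function are not specified in the abstract, so the sketch below implements only the classical wavelet soft-thresholding baseline that the proposed method aims to improve on, together with the SNR measure used for evaluation (PESQ needs a dedicated implementation and is omitted). Wavelet choice, decomposition level, and test signal are assumptions.

```python
import numpy as np
import pywt

def snr_db(clean, processed):
    """SNR of a processed signal against the clean reference, in dB."""
    noise = clean - processed
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def wavelet_soft_threshold(x, wavelet="db8", level=4):
    """Classical wavelet soft-thresholding denoiser: the baseline the paper's
    operator-based transfer function aims to improve on (the operators
    themselves are not described in the abstract)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold from the finest-detail noise estimate (Donoho-Johnstone)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Synthetic "speech-like" decaying tone corrupted by white noise
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 8000)
clean = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = wavelet_soft_threshold(noisy)
print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, denoised):.1f} dB")
```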

Relevance: 100.00%

Abstract:

Doctoral program: Cibernética y Telecomunicación (2002/2004)

Relevance: 100.00%

Abstract:

Machines with moving parts give rise to vibrations and, consequently, noise. The setup and condition of each machine yield a distinctive vibration signature. A change in the vibration signature, due to a change in the machine's state, can therefore be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. A large number of signal processing techniques can be used to extract interesting information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques, including synchronous time averaging, Hilbert transform-based demodulation, the continuous wavelet transform, the Wigner-Ville distribution, and the spectral correlation density function. The detection and diagnostic capabilities of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. fatigue cracks at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
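
As an example of one technique from the list above, the sketch below implements Hilbert transform-based demodulation: band-pass filtering around a resonance excited by the fault impacts, extracting the analytic signal's envelope, and inspecting the envelope spectrum for the fault repetition frequency. The band edges, sampling rate, and simulated gear signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(x, fs, band):
    """Hilbert transform-based demodulation: band-pass around a resonance
    excited by the fault impacts, then inspect the spectrum of the
    analytic signal's envelope for the fault repetition frequency."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))     # amplitude envelope
    env = env - env.mean()                       # drop the DC component
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return freqs, np.abs(np.fft.rfft(env)) / len(env)

# Synthetic gear signal: a 4 kHz resonance amplitude-modulated once per
# revolution (5 Hz) by a cracked tooth, buried in noise (all values assumed)
fs = 20000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(4)
impacts = np.sin(np.pi * 5 * t) ** 16            # narrow pulses at 5 Hz
x = (1 + 0.5 * impacts) * np.sin(2 * np.pi * 4000 * t) + rng.normal(0, 0.5, t.size)

freqs, spec = envelope_spectrum(x, fs, band=(3000, 5000))
mask = freqs > 1
print("dominant envelope frequency:", freqs[mask][np.argmax(spec[mask])], "Hz")
```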

Relevance: 100.00%

Abstract:

Biological processes are very complex mechanisms, most of them accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the driving factors behind the recent advances in medicine and the biosciences. Like any other acquisition system, the instruments used for biological signal and image recording are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared with algorithms in the literature that tackle the same problems; conclusions are drawn from comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is driven not only by designer choices but also by the characteristics of the input signal. Performance comparisons against the state of the art in image quality assessment, contrast gain estimation, and resolution gain quantification, as well as visual inspection, show very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is possible. For extracellular recordings, the proposed denoising technique, compared with other signal processing algorithms, improves on the state of the art by almost 4 dB.
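
The thesis's adaptive deconvolution algorithms are not detailed in the abstract; as a point of reference, the sketch below shows standard non-adaptive Wiener deconvolution of a single ultrasound RF line given an estimate of the system's pulse, with an assumed noise-to-signal ratio. All signals and parameters are illustrative.

```python
import numpy as np

def wiener_deconvolve(rf_line, psf, nsr=0.05):
    """Frequency-domain Wiener deconvolution of one ultrasound RF line.
    A non-adaptive baseline with an assumed noise-to-signal ratio `nsr`;
    the thesis's algorithms adapt to the input signal, this sketch does not."""
    n = len(rf_line)
    H = np.fft.rfft(psf, n)                      # transfer function of the pulse
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener inverse filter
    return np.fft.irfft(np.fft.rfft(rf_line) * G, n)

# Synthetic RF line: sparse reflectors convolved with a Gaussian-modulated pulse
rng = np.random.default_rng(5)
n = 1024
reflectors = np.zeros(n)
reflectors[[200, 230, 600]] = [1.0, -0.7, 0.5]
k = np.arange(-32, 33)
psf = np.zeros(n)
psf[k % n] = np.exp(-((k / 8.0) ** 2)) * np.cos(2 * np.pi * 0.25 * k)  # centred pulse
rf = np.fft.irfft(np.fft.rfft(reflectors) * np.fft.rfft(psf), n)
rf = rf + 0.01 * rng.normal(size=n)

restored = wiener_deconvolve(rf, psf)
print("strongest restored samples:", np.sort(np.argsort(np.abs(restored))[-3:]))
```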

Relevance: 100.00%

Abstract:

This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated deskside computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task this solver can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software package was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation for extracting anatomical priors from static Magnetic Resonance (MR) scans has been included.
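
To make the TMI kernel concrete, the sketch below inverts a lower-triangular matrix by recursive blocking; the two diagonal sub-inversions are independent, which is exactly the task-level parallelism a dual-GPU implementation can exploit. This CPU sketch illustrates the algorithm only, not the thesis's implementation or its compressed storage scheme.

```python
import numpy as np

def tmi(L):
    """Recursive blocked inversion of a lower-triangular matrix. The two
    diagonal sub-inversions are independent tasks, which is the parallelism
    a dual-GPU TMI kernel can exploit; this CPU sketch only illustrates
    the algorithm, not the thesis's implementation or storage scheme."""
    n = L.shape[0]
    if n == 1:
        return np.array([[1.0 / L[0, 0]]])
    m = n // 2
    A, C, D = L[:m, :m], L[m:, :m], L[m:, m:]
    Ai = tmi(A)                       # independent task 1
    Di = tmi(D)                       # independent task 2 (second GPU)
    out = np.zeros_like(L, dtype=float)
    out[:m, :m] = Ai
    out[m:, m:] = Di
    out[m:, :m] = -Di @ C @ Ai        # off-diagonal coupling step
    return out

# Sanity check against the dense inverse on a well-conditioned test matrix
rng = np.random.default_rng(6)
L = np.tril(rng.normal(size=(64, 64))) + 8.0 * np.eye(64)
assert np.allclose(tmi(L), np.linalg.inv(L))
print("blocked TMI matches numpy.linalg.inv")
```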