945 results for signal processing program
Abstract:
This paper presents a new approach to developing Field Programmable Analog Arrays (FPAAs) that avoids an excessive number of programming elements in the signal path, thus enhancing performance. The paper also introduces a novel FPAA architecture devoid of the conventional switching and connection modules. The proposed FPAA is based on simple current-mode sub-circuits, and a straightforward methodology is employed for programming the Configurable Analog Cell (CAC). The current-mode approach enables the FPAA presented here to operate over almost three decades of frequency. We demonstrate the feasibility of the FPAA by implementing several signal processing functions.
Abstract:
A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how the nature of these naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
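The error decomposition described at the end of this abstract can be written schematically as below; the symbols are illustrative placeholders rather than the paper's own notation, and the sign of the last term simply reflects that prior knowledge decreases the error.

```latex
% Schematic decomposition of the error of a designed, constrained filter.
% Symbols are illustrative, not the paper's notation.
\[
  \varepsilon_{\mathrm{designed}} \approx
  \varepsilon_{\mathrm{Bayes}}      % error of the unconstrained optimal filter
  + \Delta_{\mathrm{constraint}}    % cost of restricting the filter class
  + \Delta_{\mathrm{compression}}   % cost of compressing the signal distribution
  + \Delta_{\mathrm{design}}        % finite-sample design cost
  - \Delta_{\mathrm{prior}}         % error reduction from prior knowledge
\]
```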
Abstract:
The grinding process is usually the last finishing operation applied to a precision component in the manufacturing industries. It is used to manufacture parts of different materials and must therefore deliver low roughness, control of dimensional and shape errors, and optimum tool life, at minimum cost and time. Damage at this stage is very costly, since both the preceding processes and the grinding itself are wasted when the part is damaged. This work investigates the efficiency of digital signal processing tools applied to acoustic emission signals for detecting thermal damage in the grinding process. To that end, an experimental campaign of 15 runs was carried out on a surface grinding machine operating with an aluminum oxide grinding wheel and ABNT 1045 and VC131 steels. The acoustic emission (AE) signals were acquired from a fixed sensor placed on the workpiece holder. A high-sampling-rate acquisition system at 2.5 MHz was used to collect the raw acoustic emission instead of the root mean square value usually employed. In each test the AE data were analyzed off-line and the results compared with the inspection of each workpiece for burn and other metallurgical anomalies. A number of statistical signal processing tools were evaluated.
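As an illustration of the kind of statistical tools evaluated here (not the paper's exact feature set or thresholds), a minimal sketch for extracting per-window statistics from a raw AE record sampled at 2.5 MHz could look like this:

```python
import numpy as np

def ae_features(ae, fs=2.5e6, window_s=0.001):
    """Split a raw acoustic-emission record into short windows and compute
    simple statistics often used for grinding-burn detection.
    The feature choice is illustrative, not the paper's specific pipeline."""
    n = int(window_s * fs)
    windows = ae[: len(ae) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(windows**2, axis=1))
    # Excess kurtosis highlights impulsive bursts typical of surface damage.
    centered = windows - windows.mean(axis=1, keepdims=True)
    std = centered.std(axis=1) + 1e-12
    kurt = np.mean((centered / std[:, None]) ** 4, axis=1) - 3.0
    return rms, kurt
```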
Abstract:
In this work a new method is proposed for noise reduction in speech signals in the wavelet domain. The method for signal processing makes use of a transfer function, obtained as a polynomial combination of three processings, denominated operators. The proposed method has the objective of overcoming the deficiencies of the thresholding methods and the effective processing of speech corrupted by real noises. Using the method, two speech signals are processed, contaminated by white noise and colored noises. To verify the quality of the processed signals, two evaluation measures are used: signal to noise ratio (SNR) and perceptual evaluation of speech quality (PESQ).
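The operator-based transfer function itself is not reproduced here. As a hedged illustration of the wavelet-domain setting and of the SNR measure, the sketch below shows the kind of baseline soft-thresholding the proposal aims to improve on (PESQ requires a dedicated implementation and is omitted):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(x, wavelet="db8", level=4):
    """Baseline soft-thresholding with the universal threshold.
    Illustrative reference method, not the proposed operator combination."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def snr_db(clean, processed):
    """Signal-to-noise ratio of a processed signal against the clean reference."""
    noise = clean - processed
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))
```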
Abstract:
Doctoral program: Cibernética y Telecomunicación (2002/2004)
Abstract:
Machines with moving parts give rise to vibrations and, consequently, noise. The set-up and the condition of each machine yield a distinctive vibration signature. Therefore, a change in the vibration signature, due to a change in the machine state, can be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. A large number of signal processing techniques can be used to extract useful information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques including synchronous time averaging, Hilbert transform-based demodulation, continuous wavelet transform, Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capability of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. fatigue cracks at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
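One of the listed techniques, Hilbert transform-based demodulation, can be sketched as follows; the band-pass edges are placeholders, not values from the study:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_spectrum(x, fs, band=(2000.0, 6000.0)):
    """Hilbert-transform-based demodulation of a vibration signal:
    band-pass around a resonance excited by the fault, take the
    analytic-signal envelope, and inspect its spectrum for fault
    frequencies. Band edges are placeholder values."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    env = np.abs(hilbert(xf))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), d=1 / fs)
    return freqs, spec
```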
Abstract:
Biological processes are very complex mechanisms, most of them accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the driving factors behind the recent advances in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared with those in the literature tackling the same problems; results are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior depends not only on designer choices but also on the input signal characteristics. Performance comparisons following the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is achievable. Concerning extracellular recordings, comparison of the proposed denoising technique with other signal processing algorithms showed an improvement over the state of the art of almost 4 dB.
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task that this solver can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to this acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
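The blocked recursion underlying parallel triangular matrix inversion can be sketched in NumPy as below; this only illustrates the algorithmic structure, not the thesis' dual-GPU kernel or its compressed storage scheme:

```python
import numpy as np

def tri_inv(L):
    """Blocked inversion of a lower-triangular matrix L.
    The two diagonal blocks are inverted recursively and the off-diagonal
    block of the inverse is -inv(L22) @ L21 @ inv(L11)."""
    n = L.shape[0]
    if n <= 64:                             # small base case: direct solve
        return np.linalg.solve(L, np.eye(n))
    k = n // 2
    A = tri_inv(L[:k, :k])                  # inverse of the top-left block
    C = tri_inv(L[k:, k:])                  # inverse of the bottom-right block
    B = -C @ L[k:, :k] @ A                  # off-diagonal block of the inverse
    out = np.zeros_like(L)
    out[:k, :k] = A
    out[k:, k:] = C
    out[k:, :k] = B
    return out
```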
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks (WSNs). After a general overview of sensor networks, the energy problem is introduced, dividing the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and thus prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments, and the best trade-off between reconstruction quality and power consumption is investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared with another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
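As a minimal, self-contained illustration of compressive-sensing reconstruction (plain ISTA, assuming the signal is sparse in the identity basis; not the mixed, DCS, KCS or GS-CS schemes studied in the thesis), consider:

```python
import numpy as np

def ista(y, Phi, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
    y: compressed measurements, Phi: measurement matrix (m x n, m << n)."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)         # gradient of the data-fit term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```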
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can support physicians during diagnosis by providing them with a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. The research on UTC produced a CAD tool for Prostate Cancer detection to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing the data-collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness in reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart-disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework, and its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
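A single explicit level-set update of the kind such a segmentation framework accelerates can be sketched in NumPy as follows; this is a generic 2-D illustration, not the thesis' GPU implementation:

```python
import numpy as np

def level_set_step(phi, speed, dt=0.1, nu=0.5):
    """One explicit update of a level-set function phi on a 2-D grid:
    dphi/dt = (speed + nu * curvature) * |grad phi|, where speed is an
    image-derived expansion force and the curvature term regularizes
    the evolving contour."""
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-8
    # Curvature = divergence of the unit normal of the level sets.
    curvature = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
    return phi + dt * (speed + nu * curvature) * mag
```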
Abstract:
In the present thesis, a new diagnosis methodology based on advanced time-frequency analysis techniques is presented. More precisely, a new fault index is defined that allows individual fault components to be tracked in a single frequency band. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed single frequency band; the discrete wavelet transform is then applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time under any transient operating condition;
- no need for speed/slip measurement or estimation;
- higher accuracy in filtering frequency components around the fundamental in the case of rotor faults;
- reduced likelihood of false indications by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band);
- low memory requirement due to the low sampling frequency;
- reduced latency of time processing (no repeated sampling operations required).
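A minimal sketch of the frequency-sliding and wavelet-energy idea described above (the mixer construction, wavelet family, decomposition level and choice of band are placeholders, not the thesis' settings):

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def fault_index(x, fs, f_fault, wavelet="db10", level=6):
    """Shift the fault-related component toward DC (frequency sliding),
    decompose with the discrete wavelet transform, and use the energy
    of the lowest band as a fault index."""
    t = np.arange(len(x)) / fs
    analytic = hilbert(x)
    slid = np.real(analytic * np.exp(-2j * np.pi * f_fault * t))  # frequency sliding
    coeffs = pywt.wavedec(slid, wavelet, level=level)
    approx = coeffs[0]                        # band now containing the fault component
    return np.sum(approx**2) / len(approx)    # energy-based fault index
```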
Abstract:
This thesis reports on the experimental realization, characterization and application of a novel microresonator design. The so-called “bottle microresonator” sustains whispering-gallery modes in which light fields are confined near the surface of the micron-sized silica structure by continuous total internal reflection. While whispering-gallery mode resonators in general exhibit outstanding properties in terms of both temporal and spatial confinement of light fields, their monolithic design makes tuning of their resonance frequency difficult. This impedes their use, e.g., in cavity quantum electrodynamics (CQED) experiments, which investigate the interaction of single quantum mechanical emitters of predetermined resonance frequency with a cavity mode. In contrast, the highly prolate shape of the bottle microresonators gives rise to a customizable mode structure, enabling full tunability. The thesis is organized as follows: In chapter I, I give a brief overview of different types of optical microresonators. Important quantities, such as the quality factor Q and the mode volume V, which characterize the temporal and spatial confinement of the light field, are introduced. In chapter II, a wave equation calculation of the modes of a bottle microresonator is presented. The intensity distribution of different bottle modes is derived and their mode volume is calculated. A brief description of light propagation in ultra-thin optical fibers, which are used to couple light into and out of bottle modes, is given as well. The chapter concludes with a presentation of the fabrication techniques of both structures. Chapter III presents experimental results on highly efficient, nearly lossless coupling of light into bottle modes as well as their spatial and spectral characterization. Ultra-high intrinsic quality factors exceeding 360 million as well as full tunability are demonstrated. In chapter IV, the bottle microresonator in add-drop configuration, i.e., with two ultra-thin fibers coupled to one bottle mode, is discussed. The highly efficient, nearly lossless coupling characteristics of each fiber, combined with the resonator's high intrinsic quality factor, enable resonant power transfers between both fibers with efficiencies exceeding 90%. Moreover, the favorable ratio of absorption and the nonlinear refractive index of silica yields optical Kerr bistability at record-low powers on the order of 50 µW. Combined with the add-drop configuration, this allows one to route optical signals between the outputs of both ultra-thin fibers simply by varying the input power, thereby enabling applications in all-optical signal processing. Finally, in chapter V, I discuss the potential of the bottle microresonator for CQED experiments with single atoms. Its Q/V ratio, which determines the ratio of the atom-cavity coupling rate to the dissipative rates of the subsystems, aligns with the values obtained for state-of-the-art CQED microresonators. In combination with its full tunability and the possibility of highly efficient light transfer to and from the bottle mode, this makes the bottle microresonator a unique tool for quantum optics applications.
Abstract:
We describe a recent offering of a linear systems and signal processing course for third-year electrical and computer engineering students. This course is a prerequisite for our first digital signal processing course. Students have traditionally viewed linear systems courses as mathematical and extremely difficult. Without compromising the rigor of the required concepts, we strove to make the course fun, with application-based hands-on laboratory projects. These projects can be modified easily to meet specific instructors' preferences. © 2011 IEEE.