958 results for rail wheel flat, vibration monitoring, wavelet approaches, Daubechies wavelets, signal processing
Abstract:
Synthetic-heterodyne demodulation is a useful technique for dynamic displacement and velocity detection in interferometric sensors, as it can provide an output signal that is immune to interferometric drift. With the advent of cost-effective, high-speed real-time signal-processing systems and software, processing of the complex signals encountered in interferometry has become more feasible. In synthetic heterodyne, obtaining the actual dynamic displacement or vibration of the object under test requires knowledge of the interferometer visibility and of the argument of two Bessel functions. In this paper, a method is described for determining the former and for setting the Bessel-function argument to a fixed value that ensures maximum sensitivity. Conventional synthetic-heterodyne demodulation requires the use of two in-phase local oscillators; however, the phase of these oscillators relative to the interferometric signal is unknown. It is shown that, by using two additional quadrature local oscillators, a demodulated signal can be obtained that is independent of this phase difference. The experimental interferometer is a Michelson configuration using a visible single-mode laser, whose current is sinusoidally modulated at a frequency of 20 kHz. The detected interferometer output is acquired using a 250 kHz analog-to-digital converter and processed in real time. The system is used to measure the displacement-sensitivity frequency response and linearity of a piezoelectric mirror shifter over a range of 500 Hz to 10 kHz. The experimental results show good agreement with two independent techniques: the signal-coincidence method and the so-called n-commuted Pernick method.
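To illustrate the processing chain described above, the following Python sketch implements conventional synthetic-heterodyne demodulation under the simplifying assumption that the local oscillators are already in phase with the interferometric signal; the paper's contribution, removing that assumption with two additional quadrature oscillators, is not reproduced here. The names v (sampled detector output), fs (the 250 kHz sampling rate) and f0 (the 20 kHz modulation frequency) are assumptions for illustration.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.special import jv

def demodulate(v, fs, f0=20e3, C=2.63):
    # Detector output modelled as v(t) ~ A + B*cos(C*cos(w0*t) + phi(t)).
    # C is chosen so that J1(C) ~ J2(C), the fixed Bessel-function argument
    # giving maximum sensitivity mentioned in the abstract.
    t = np.arange(len(v)) / fs
    b, a = butter(4, 0.5 * f0 / (fs / 2))                      # low-pass for baseband terms
    lowpass = lambda x: filtfilt(b, a, x)
    s1 = lowpass(v * np.cos(2 * np.pi * f0 * t)) / jv(1, C)    # proportional to B*sin(phi)
    s2 = lowpass(v * np.cos(4 * np.pi * f0 * t)) / jv(2, C)    # proportional to B*cos(phi)
    phi = np.unwrap(np.arctan2(s1, s2))                        # interferometric phase (up to a constant)
    return phi                                                 # displacement = phi * lambda / (4*pi)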
Abstract:
Nowadays, the development of microsystems that integrate most of the stages involved in an analytical process has attracted enormous interest in several research fields. This approach provides experimental set-ups of increased robustness and reliability, which simplifies their application to in-line and continuous biomedical and environmental monitoring. In this work, a novel, compact and autonomous microanalyzer aimed at multiwavelength colorimetric determinations is presented. It integrates the microfluidics (a three-dimensional mixer and a 25 mm long "Z-shaped" optical flow cell), a highly versatile multiwavelength optical detection system and the associated electronics for signal processing and drive, all in the same device. The flexibility provided by its design allows the microanalyzer to be operated either in single fixed-wavelength mode, to provide a dedicated photometer, or in multiple-wavelength mode, to obtain discrete pseudospectra. To increase its reliability, automate its operation and allow it to work unattended, a multicommutation sub-system was developed and integrated with the experimental set-up. The device was initially evaluated in the absence of chemical reactions using four acidochromic dyes and later applied to the determination of key environmental parameters such as the phenol index, chromium(VI) and nitrite ions. Results were comparable with those obtained with commercial instrumentation and demonstrate the versatility of the proposed microanalyzer as an autonomous, portable device that can be applied to other analytical methodologies based on colorimetric determinations.
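The discrete pseudospectra mentioned above require only a small amount of processing per wavelength; the following Python sketch shows the underlying Beer-Lambert step, A = -log10(I/I0), applied to one hypothetical set of multiwavelength readings (the LED wavelengths and intensity values are invented for illustration, not data from the paper).

import numpy as np

wavelengths_nm = np.array([470, 525, 590, 630])   # hypothetical LED set
blank = np.array([1.02, 0.98, 1.00, 0.99])        # reference (carrier) intensities
sample = np.array([0.55, 0.71, 0.93, 0.97])       # intensities with the coloured product

absorbance = -np.log10(sample / blank)            # one absorbance value per wavelength
for wl, A in zip(wavelengths_nm, absorbance):
    print(f"{wl} nm: A = {A:.3f}")                # the set of points forms the pseudospectrum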
Abstract:
Monitoring foetal health is a very important task in clinical practice for appropriately planning pregnancy management and delivery. In the third trimester of pregnancy, ultrasound cardiotocography is the most widely employed diagnostic technique: foetal heart rate and uterine contraction signals are recorded simultaneously and analysed to ascertain foetal health. Because the interpretation of ultrasound cardiotocography still lacks complete reliability, new parameters and methods of interpretation, or alternative methodologies, are needed to further support physicians' decisions. To this aim, in this thesis, foetal phonocardiography and electrocardiography are considered as alternative techniques. Furthermore, the variability of the foetal heart rate is thoroughly studied. Frequency components and their modifications can be analysed with a time-frequency approach, giving a distinct picture of the spectral components and of their change over time in response to foetal reactions to internal and external stimuli (such as uterine contractions). Such modifications of the power spectrum can be a sign of autonomic nervous system reactions and therefore represent additional, objective information about foetal reactivity and health. However, some limits of ultrasonic cardiotocography remain, for example in long-term foetal surveillance, which is often recommended mainly in high-risk pregnancies. In these cases, the fully non-invasive acoustic recording through the maternal abdomen, foetal phonocardiography, represents a valuable alternative to ultrasonic cardiotocography. Unfortunately, the recorded foetal heart sound signal is heavily corrupted by noise, so the determination of the foetal heart rate raises serious signal-processing issues. A new algorithm for foetal heart rate estimation from foetal phonocardiographic recordings is presented in this thesis. Different filtering and enhancement techniques were applied to enhance the first foetal heart sounds; the resulting signal-processing strategies were implemented, evaluated and compared in order to identify the one giving the best results on average. In particular, phonocardiographic signals were recorded simultaneously with ultrasonic cardiotocographic signals in order to compare the two foetal heart rate series (the one estimated by the developed algorithm and the one provided by the cardiotocographic device). The algorithm's performance was tested on phonocardiographic signals recorded from pregnant women, yielding reliable foetal heart rate series very close to the ultrasound cardiotocographic recordings taken as reference. The algorithm was also tested using a foetal phonocardiographic recording simulator developed and presented in this thesis. The goal was to provide software for simulating recordings corresponding to different foetal conditions and recording situations, and to use it as a test tool for comparing and assessing different foetal heart rate extraction algorithms. Since there are few studies on the time characteristics and frequency content of foetal heart sounds, and the available literature in this area is poor and not rigorous, a data-collection pilot study was also conducted with the purpose of specifically characterising both foetal and maternal heart sounds. Finally, in this thesis, the use of foetal phonocardiographic and electrocardiographic methodologies, and their combination, is presented in order to detect foetal heart rate and other functional anomalies.
The developed methodologies, suitable for longer-term assessment, were able to correctly detect heart-beat events such as the first and second heart sounds and the QRS waves. The detection of such events provides reliable measures of the foetal heart rate and, potentially, information about the systolic time intervals and the foetal circulatory impedance.
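The following Python sketch illustrates one generic way of estimating a foetal heart rate from a phonocardiographic signal (band-pass filtering, envelope extraction and peak picking). It is not the algorithm developed in the thesis, only a minimal example of the kind of processing described; pcg and fs are hypothetical inputs.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def estimate_fhr(pcg, fs):
    # Band-pass around the energy of the first heart sound (roughly 20-100 Hz).
    b, a = butter(4, [20 / (fs / 2), 100 / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, pcg)))        # amplitude envelope
    # Foetal heart rate is roughly 110-160 bpm, so enforce a minimum beat spacing.
    peaks, _ = find_peaks(env, distance=int(0.3 * fs),
                          height=np.mean(env) + np.std(env))
    ibi = np.diff(peaks) / fs                         # inter-beat intervals in seconds
    return 60.0 / ibi                                 # beat-to-beat heart rate in bpm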
Abstract:
Structural Health Monitoring (SHM) is the process of characterizing existing civil structures for the purposes of damage detection and structural identification. It is based, first of all, on the collection of data that are inevitably affected by noise. In this work a procedure to denoise the measured acceleration signal is proposed, based on EMD-thresholding techniques. Moreover, the velocity and displacement responses are estimated starting from the measured acceleration.
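A minimal sketch of this kind of processing, assuming the PyEMD package and a simple hard threshold on the highest-frequency IMFs (the work may use a different thresholding rule): the measured acceleration is decomposed, denoised and then integrated twice to estimate velocity and displacement.

import numpy as np
from PyEMD import EMD
from scipy.integrate import cumulative_trapezoid
from scipy.signal import detrend

def denoise_and_integrate(acc, fs, n_noisy_imfs=2):
    imfs = EMD().emd(acc)                          # empirical mode decomposition
    clean = np.zeros_like(acc)
    for k, imf in enumerate(imfs):
        if k < n_noisy_imfs:                       # first IMFs are assumed noise-dominated
            thr = np.std(imf)                      # simple hard threshold (assumption)
            imf = np.where(np.abs(imf) > thr, imf, 0.0)
        clean += imf
    # Detrending after each integration removes the drift introduced by numerical integration.
    vel = detrend(cumulative_trapezoid(clean, dx=1 / fs, initial=0.0))
    disp = detrend(cumulative_trapezoid(vel, dx=1 / fs, initial=0.0))
    return clean, vel, disp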
Abstract:
The evolution of embedded electronic applications forces electronic-system designers to meet ever-increasing requirements. This evolution pushes up the computational power demanded of digital signal processing systems and, because of the increasing mobility of such applications, the energy required to accomplish the computations. Current approaches to meeting these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators that are able to meet both performance and energy requirements. On the other hand, the excessive specificity of such accelerators often results in a lack of flexibility that affects non-recurring engineering costs, time to market, and market volumes as well. The state of the art mainly proposes two solutions to overcome these issues while delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both solutions benefit from post-fabrication programmability, which ultimately results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too large for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging the two computational paradigms in order to address all the constraints introduced above. This thesis explores the design and application spectrum of reconfigurable computing exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, classifying the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes and prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, from which a mixed algorithm able to successfully reduce the power consumption is derived. The analysis then moves from compression on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
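As a toy illustration of the compressive-sensing idea discussed above, the following Python sketch samples a sparse signal through a random measurement matrix and reconstructs it with a basic iterative soft-thresholding (ISTA) solver; the dimensions and the regularization weight are arbitrary assumptions, not values from the thesis.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                                   # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # random measurement matrix
y = Phi @ x                                            # compressed samples sent by a node

def ista(y, Phi, lam=0.05, n_iter=500):
    L = np.linalg.norm(Phi, 2) ** 2                    # Lipschitz constant of the gradient
    xh = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = xh - Phi.T @ (Phi @ xh - y) / L            # gradient step on the data-fit term
        xh = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft thresholding
    return xh

x_hat = ista(y, Phi)
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))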
Abstract:
In the present work, structure-property relationships of the conjugated model polymer MEH-PPV were investigated. For this purpose, precipitation fractionation was used to obtain MEH-PPV with different molecular weights (Mw), in particular low-Mw MEH-PPV, since this is optimally suited for optical waveguide devices. We found that the preparation of a sufficient amount of low-Mw MEH-PPV with a narrow Mw distribution depends essentially on the appropriate choice of solvent and of the temperature during the addition of the precipitant. As an alternative, UV-induced chain-scission effects were investigated. From the comparison of the two approaches we conclude that precipitation fractionation is better suited than UV treatment for producing MEH-PPV with a specific Mw, since the UV light creates chain defects along the polymer backbone. 1H NMR and FTIR spectroscopy were used to investigate these chain defects. We also observed that the wavelengths of the absorption maxima of the MEH-PPV fractions increase with chain length until the number of repeat units reaches n = 110, a value significantly larger than previously reported.
The optical properties of MEH-PPV waveguides were investigated, and it was shown that the optical constants can be reproduced excellently. We studied the influence of solvent and temperature during spin coating on film thickness, surface roughness, refractive index, birefringence and waveguide propagation loss. We found that with increasing boiling point of the solvent the film thickness and roughness decrease, while refractive index, birefringence and waveguide propagation losses increase. We conclude that solvents with high boiling points lead to low evaporation rates, which favours aggregate formation during spin coating. In contrast, an elevated temperature during film preparation increases film thickness and roughness, while refractive index and birefringence decrease.
For film preparation on glass substrates and fused-silica fibres, the dip-coating method was used. The film thickness depends on the concentration of the solution, the withdrawal speed and the immersion time. Using dip coating, we deposited MEH-PPV films on bottle microresonators to investigate all-optical switching processes. This approach, in particular with low-Mw MEH-PPV, is promising for all-optical signal processing with large bandwidth.
In addition, the morphology of thin films of other PPV derivatives was investigated by FTIR spectroscopy. We found that the degree of alkyl substitution has a strong influence on the average orientation of the polymer backbones in thin films.
Abstract:
We present a new method for the enhancement of speech. The method is designed for scenarios in which targeted speaker enrollment as well as system training within the typical noise environment are feasible. The proposed procedure is fundamentally different from most conventional and state-of-the-art denoising approaches. Instead of filtering a distorted signal we are resynthesizing a new “clean” signal based on its likely characteristics. These characteristics are estimated from the distorted signal. A successful implementation of the proposed method is presented. Experiments were performed in a scenario with roughly one hour of clean speech training data. Our results show that the proposed method compares very favorably to other state-of-the-art systems in both objective and subjective speech quality assessments. Potential applications for the proposed method include jet cockpit communication systems and offline methods for the restoration of audio recordings.
Abstract:
At every level of organization of the central nervous system, most processes, ranging from ion channels to neuronal networks, occur in a closed loop, where the input to the system depends on its output. In contrast, most in vitro preparations and experimental protocols operate autonomously and do not depend on the output of the studied system. Thanks to progress in digital signal processing and real-time computing, it is now possible to artificially close the loop and investigate biophysical processes and mechanisms under more realistic conditions. In this contribution, we review some of the most relevant examples of this new trend in in vitro electrophysiology, ranging from the use of dynamic clamp to multi-electrode distributed feedback stimulation. We are convinced that these represent the beginning of new frontiers for the in vitro investigation of the brain, promising to break down the borders that still exist between theoretical and experimental approaches while taking advantage of cutting-edge technologies.
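As an illustration of the closed-loop principle, the Python sketch below shows one update step of a dynamic clamp, in which the injected current is recomputed from the instantaneous membrane potential at every cycle; read_voltage and inject_current are placeholders for the real-time acquisition hardware, and the conductance value is an arbitrary assumption.

# One iteration of a dynamic-clamp loop (schematic, not tied to any specific rig).
g_syn = 2.0      # artificial synaptic conductance, nS (assumed value)
E_rev = 0.0      # reversal potential of the artificial conductance, mV

def dynamic_clamp_step(read_voltage, inject_current):
    v = read_voltage()                 # membrane potential measured by the amplifier (mV)
    i = -g_syn * (v - E_rev)           # current the artificial conductance would deliver (pA)
    inject_current(i)                  # command fed back to the cell: the loop is closed
    return v, i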
Abstract:
In a statistical inference scenario, the estimation of a target signal or its parameters is done by processing data from informative measurements. The estimation performance can be enhanced if we choose the measurements based on criteria that direct our sensing resources so that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information can be extracted from the data in each measurement process. This approach fits well within the Bayesian inference model, which is often used to produce successive posterior distributions of the associated parameter. We explore the sensor-array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. The adaptive sensing of both static and dynamic system models is performed by the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move according to some distribution, and the prior distribution is updated at each time step. The information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We have attempted to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
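The sequential Bayesian update behind such an adaptive-sensing scheme can be written down compactly for a linear Gaussian model y = A x + w; the Python sketch below shows one posterior update for a chosen measurement matrix A, together with a simple greedy rule for picking the next measurement row by expected information gain. All symbols are generic illustrations, not the notation of the thesis.

import numpy as np

def bayesian_update(mu, Sigma, A, y, noise_var):
    # Prior: x ~ N(mu, Sigma).  Measurement: y = A x + w, with w ~ N(0, noise_var * I).
    S = A @ Sigma @ A.T + noise_var * np.eye(A.shape[0])    # innovation covariance
    K = Sigma @ A.T @ np.linalg.inv(S)                      # gain
    mu_post = mu + K @ (y - A @ mu)
    Sigma_post = Sigma - K @ A @ Sigma
    return mu_post, Sigma_post

def pick_next_row(Sigma, candidates, noise_var):
    # Greedy choice: take the candidate row a maximizing 0.5*log(1 + a Sigma a^T / noise_var),
    # i.e. the expected information gain about x from that single measurement.
    gains = [0.5 * np.log1p(a @ Sigma @ a / noise_var) for a in candidates]
    return candidates[int(np.argmax(gains))]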