899 results for Transformada de Wavelet
Abstract:
We consider a time-inhomogeneous diffusion process given by a stochastic differential equation whose drift term contains a deterministic T-periodic signal of known periodicity. This signal is assumed to belong to a Besov space. We estimate it with a nonparametric wavelet estimator. Our estimator is inspired by a thresholded wavelet density estimator constructed for the classical i.i.d. model in 1996 by Donoho, Johnstone, Kerkyacharian and Picard. Under certain ergodicity assumptions on the process, we establish nonparametric convergence rates which, up to a logarithmic term, match the rates of the classical i.i.d. case. These rates are proved via oracle inequalities that build on results for discrete-time Markov chains due to Clémençon (2001). We also consider a technically simpler special case and present some computer simulations of the estimator.
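As a hedged illustration of the thresholding idea the estimator builds on (not the thesis' estimator itself), the following Python sketch hard-thresholds empirical wavelet coefficients with the universal threshold of Donoho and Johnstone, using the PyWavelets package; the wavelet choice, decomposition level and test signal are assumptions:

```python
# Hedged sketch: hard-thresholded wavelet reconstruction in the spirit of
# Donoho/Johnstone/Kerkyacharian/Picard (1996); parameter choices are
# illustrative, not those of the thesis.
import numpy as np
import pywt

def wavelet_threshold_estimate(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold sigma * sqrt(2 log n); sigma is estimated from the
    # finest detail coefficients via the median absolute deviation.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Example: recover a noisy periodic signal (a stand-in for the periodic drift).
t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
smooth = wavelet_threshold_estimate(noisy)
```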
Abstract:
WaveTrack is an optimized implementation of a wavelet-based pitch-tracking algorithm; specifically, it uses the Fast Lifting Wavelet Transform with the Haar wavelet. The library is written in C, and its strengths include very low latency, very good accuracy and good flexibility of use thanks to a number of configurable parameters.
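A minimal Python sketch of the Haar lifting step at the core of the Fast Lifting Wavelet Transform; this is the generic textbook formulation, not WaveTrack's C implementation, and the even input length is an assumption:

```python
# Hedged sketch of one Haar lifting step (predict + update), the building
# block of the Fast Lifting Wavelet Transform mentioned in the abstract.
import numpy as np

def haar_lifting_step(x):
    # Assumes len(x) is even.
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd samples from even ones
    approx = even + detail / 2.0   # update: preserve the running mean
    return approx, detail

def haar_lifting(x, levels):
    details = []
    approx = np.asarray(x)
    for _ in range(levels):
        approx, d = haar_lifting_step(approx)
        details.append(d)
    return approx, details
```

Pitch trackers of this kind typically look for periodicity in the detail bands; WaveTrack's exact decision logic is not described in the abstract.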
Abstract:
This thesis gives a description of Wavelet Transforms oriented towards image coding in the JPEG2000 format. After describing the first stages of image coding, we study the artifacts that arise from analysis with the Discrete Cosine Transform (used in the predecessor format, JPEG). We then describe multiresolution analysis and the features that distinguish it from the DCT, and analyze the Wavelet Transform, giving only a few theoretical hints and seeking to derive it in a manner oriented more towards the application. We conclude the thesis by describing the coding of the computed coefficients and by giving examples of the countless applications of multiresolution analysis in different scientific fields and in signal transmission.
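To make the coding pipeline concrete, here is a hedged sketch of a JPEG2000-style multilevel 2D wavelet decomposition with uniform quantization of the detail subbands, using PyWavelets; identifying 'bior4.4' with the CDF 9/7 wavelet of lossy JPEG2000, and the quantization step size, are assumptions:

```python
# Hedged sketch: JPEG2000-style multilevel 2D wavelet analysis. 'bior4.4' is
# commonly identified with the CDF 9/7 wavelet of lossy JPEG2000; treat that
# mapping, and the quantization step, as assumptions.
import numpy as np
import pywt

image = np.random.rand(256, 256)          # stand-in for an image tile
coeffs = pywt.wavedec2(image, "bior4.4", level=3)
cA3, *detail_levels = coeffs              # coarse approximation + (cH, cV, cD) per level

# Coarse quantization of the detail subbands is where a lossy coder saves bits.
step = 0.05
quantized = [cA3] + [tuple(np.round(d / step) * step for d in lvl)
                     for lvl in detail_levels]
reconstructed = pywt.waverec2(quantized, "bior4.4")
```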
Abstract:
Technological progress confronts medical imaging, like no other branch of medicine, with a rapid increase in the volume of data to be stored. Acquiring, maintaining and expanding the necessary infrastructure is increasingly becoming an economic factor. One technique that could counter this trend is irreversible (lossy) image compression. It has been the subject of many studies for over 10 years, whose results are in turn reflected in recommendations on the use of irreversible compression issued by several national and international organizations such as CAR, DRG, RCR and ESR. The tenor of these recommendations is that the use of moderate irreversible image compression is safe and sensible. The recommendations also specify degrees of compression, expressed as compression ratios, which, depending on the examination and anatomical region, are considered safe to apply and produce no diagnostically relevant loss in the compressed images.

Various compression algorithms have been proposed. Ultimately, the two widely used algorithms JPEG and JPEG2000 have proven themselves, the latter seeing increasing use recently owing to its simpler handling and extensive additional functionality.

Owing to legal and ethical concerns, irreversible compression has not found broad practical application. One reason is the uncertainty as to how irreversible compression affects the postprocessing of medical images, such as segmentation, volumetry or 3D rendering. Previous studies on this topic cover four different postprocessing algorithms, which proved largely unaffected by lossy compression within the range of the published compression ratios mentioned above. Only the computer-assisted measurement of stenosis grades in digital coronary angiography conflicts with the recommendations in force in the United Kingdom. The use of different compression algorithms further limits the generality of these results.

To extend the evidence, four further postprocessing algorithms were examined for their compression tolerance. Compression ratios of 8:1, 10:1 and 15:1 were used, which bracket the ratios recommended by CAR, DRG, RCR and ESR and thus provide a practice-oriented setting. JPEG2000 was chosen as the compression algorithm because of its increasing use in studies and its aforementioned advantages in handling and additional functionality. The four algorithms comprised 3D volume rendering of CT angiographies of the pelvic and leg vessels, computer-assisted detection of pulmonary nodules, automated volumetry of liver lesions, and functional determination of the ejection fraction in computed tomography scans of the heart.

None of the four algorithms showed any influence of irreversible image compression at the chosen compression ratios (8:1, 10:1 and 15:1). Together with the existing literature, the results suggest that moderate irreversible compression within current recommendations has no influence on the postprocessing of medical images.
An explicit prediction for a particular algorithm that has not yet been examined is, however, not reliably possible, owing to differing modes of operation and implementations. If a postprocessing algorithm is to be applied to compressed images, it must first be tested for its compression tolerance, and the test must establish a legal and ethical basis for using the algorithm on compressed material. Two options are conceivable: testing within the institution, possibly with the help of prefabricated libraries, or testing by the algorithm's vendor.
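A hedged sketch of the kind of compression-tolerance test suggested above: compress at a fixed ratio, rerun the postprocessing step, and compare outputs. It assumes a Pillow build with OpenJPEG support and uses the documented JPEG 2000 encoder options quality_mode, quality_layers and irreversible; the file paths and the difference metric are illustrative:

```python
# Hedged sketch of a compression-tolerance test: compress at a fixed ratio,
# run the same postprocessing on both versions, and compare. Assumes a Pillow
# build with OpenJPEG; encoder options follow Pillow's JPEG 2000 docs.
import numpy as np
from PIL import Image

def compress_jp2(path_in, path_out, ratio):
    img = Image.open(path_in)
    img.save(path_out, "JPEG2000", quality_mode="rates",
             quality_layers=[ratio], irreversible=True)

def mean_abs_difference(path_a, path_b):
    a = np.asarray(Image.open(path_a), dtype=float)
    b = np.asarray(Image.open(path_b), dtype=float)
    return np.mean(np.abs(a - b))

# For ratio in (8, 10, 15): compress, rerun the postprocessing algorithm, and
# check that its output (e.g. a volume or a stenosis grade) is unchanged.
```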
Abstract:
This thesis focuses on the study of the Fourier transform and the Wavelet transform. The first part analyzes the fundamental aspects of the Fourier transform. The Fourier transform on finite abelian groups is then defined, after a suitable review of the structure of such groups. It is shown that computing the Fourier transform on a quotient requires fewer operations than computing it directly on the original group. The last part of the thesis is devoted to the study of Wavelets. The Haar system is presented, which allows a function to be expressed as a series of Haar functions as an alternative to the Fourier series. A genuine method for constructing wavelets is then proposed, and it is observed that this construction is closely tied to multiresolution analysis. A crucial role is played by the scaling identity, an algebraic identity that defines certain coefficients which completely determine the wavelets. The Fourier transform then intervenes, reducing the search for these coefficients to the search for certain suitable functions that explicitly determine the Wavelets. Not every choice of these functions is admissible; there are various approaches, and here the approach of Ingrid Daubechies is presented. Wavelets form bases of the space of square-integrable functions and are particularly interesting for the decomposition of signals. They are therefore related to harmonic analysis and are adopted in a great number of applications, often replacing the conventional Fourier transform.
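The scaling identity the abstract refers to is the two-scale (refinement) equation; in the usual normalization it reads as follows, where the coefficients h_k determine the scaling function and, through the g_k, the wavelet (for the Haar system, h_0 = h_1 = 1/sqrt(2)):

```latex
% Two-scale (refinement) equation and the associated wavelet equation;
% the coefficients h_k completely determine the wavelet system.
\[
\varphi(x) = \sqrt{2}\sum_{k\in\mathbb{Z}} h_k\,\varphi(2x-k),
\qquad
\psi(x) = \sqrt{2}\sum_{k\in\mathbb{Z}} g_k\,\varphi(2x-k),
\qquad
g_k = (-1)^k\,h_{1-k}.
\]
```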
Abstract:
OBJECTIVE: In ictal scalp electroencephalography (EEG), the presence of artefacts and the wide-ranging patterns of discharges are hurdles to good diagnostic accuracy. Quantitative EEG aids the lateralization and/or localization of epileptiform activity. METHODS: Twelve patients achieving Engel Class I/IIa outcome one year after temporal lobe surgery were selected, with approximately 1-3 ictal EEGs analyzed per patient. The EEG signals were denoised with the discrete wavelet transform (DWT), followed by computation of the normalized absolute slopes and spatial interpolation of the scalp topography, with detection of local maxima. For localization, the region with the highest normalized absolute slopes at the time when epileptiform activity was registered (>2.5 times the standard deviation) was designated the region of onset. For lateralization, the cerebral hemisphere registering the first appearance of normalized absolute slopes >2.5 times the standard deviation was designated the side of onset. For comparison, all the EEG episodes were reviewed by two neurologists blinded to clinical information, who determined the localization and lateralization of seizure onset by visual analysis. RESULTS: 16/25 seizures (64%) were correctly localized by the visual method and 21/25 (84%) by the quantitative EEG method. 12/25 seizures (48%) were correctly lateralized by the visual method and 23/25 (92%) by the quantitative EEG method. Comparing the two methods, the McNemar test gave p=0.15 for localization and p=0.0026 for lateralization. CONCLUSIONS: The quantitative EEG method correctly lateralized significantly more seizure episodes, with a trend towards more correctly localized seizures. SIGNIFICANCE: Coupling the DWT with the absolute slope method helps clinicians achieve better EEG diagnostic accuracy.
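A hedged Python sketch of the two quantitative steps named in the abstract, DWT denoising and a normalized absolute slope, on a single channel; the wavelet, level, soft thresholding and the choice of baseline segment are assumptions, while the 2.5-standard-deviation rule follows the abstract:

```python
# Hedged sketch of the pipeline on one EEG channel: DWT denoising followed by
# a normalized absolute slope; window/baseline choices are illustrative.
import numpy as np
import pywt

def denoise_dwt(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def normalized_abs_slope(x, fs, baseline):
    slope = np.abs(np.gradient(x) * fs)  # absolute slope per second
    # Normalize by the standard deviation of the baseline slope, so that the
    # abstract's ">2.5 times standard deviation" rule becomes "index > 2.5".
    return slope / np.std(np.abs(np.gradient(baseline) * fs))
```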
Abstract:
Wavelet analysis offers an alternative to Fourier-based time-series analysis, and is particularly useful when the amplitudes and periods of dominant cycles are time dependent. We analyse climatic records derived from oxygen isotopic ratios of marine sediment cores with modified Morlet wavelets. We use a normalization of the Morlet wavelets which allows direct correspondence with Fourier analysis. This provides a direct view of the oscillations at various frequencies, and illustrates the nature of the time dependence of the dominant cycles.
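A hedged sketch of a Morlet wavelet transform with an amplitude-preserving normalization, so that a unit-amplitude sinusoid produces a ridge of roughly unit magnitude, which is the direct correspondence with Fourier analysis in spirit; the omega0 value, the scale-to-period relation and the normalization form are assumptions rather than the paper's exact choices:

```python
# Hedged sketch: Morlet continuous wavelet transform by direct convolution,
# with an assumed amplitude normalization; not the paper's exact formulation.
import numpy as np

def morlet_cwt(x, dt, periods, omega0=6.0):
    x = np.asarray(x, dtype=float)
    out = np.empty((len(periods), len(x)), dtype=complex)
    t = (np.arange(len(x)) - len(x) // 2) * dt
    for i, p in enumerate(periods):
        # Standard relation between Fourier period and Morlet scale.
        s = p * (omega0 + np.sqrt(2 + omega0**2)) / (4 * np.pi)
        w = np.exp(1j * omega0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        w *= 2.0 / np.sum(np.abs(w))     # amplitude normalization (assumed form)
        out[i] = np.convolve(x, np.conj(w)[::-1], mode="same")
    return out  # |out[i]| ~ amplitude of the cycle with period periods[i]
```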
Abstract:
This work is motivated by providing and evaluating a fusion algorithm for remotely sensed images, i.e. the fusion of a high-spatial-resolution panchromatic (PAN) image with a multispectral (MS) image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach to performing an analytic, oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model establishing how information is extracted from the PAN band and how that information is injected into the low-spatial-resolution MS bands. The approach was applied to SPOT 5 images, where some bands fall outside the PAN band's spectrum. We propose an optional step in the quality evaluation protocol: studying the quality of the fusion by regions, where each region represents a specific feature of the image. The results show that the DT-CWT based approach offers good spatial quality while retaining the spectral information of the original SPOT 5 images. The additional step facilitates the identification of the regions most affected by the fusion process.
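As a hedged sketch of detail substitution in the DT-CWT domain (a generic injection model, not the one the paper defines), assuming the Python `dtcwt` package's Transform2d/Pyramid API and an MS band already upsampled to the PAN grid:

```python
# Hedged pansharpening sketch with the `dtcwt` package: keep the MS band's
# lowpass, take the PAN band's directional detail coefficients, and invert.
import dtcwt

def dtcwt_pansharpen(ms_band_up, pan, nlevels=3):
    # ms_band_up: MS band upsampled to the PAN grid; pan: same shape.
    t = dtcwt.Transform2d()
    p_ms = t.forward(ms_band_up, nlevels=nlevels)
    p_pan = t.forward(pan, nlevels=nlevels)
    fused = dtcwt.Pyramid(p_ms.lowpass, p_pan.highpasses)
    return t.inverse(fused)
```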
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after profiling a high-level description of the algorithm, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
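A hedged sketch of the search loop only, a (1+lambda) Evolution Strategy over a real-valued coefficient vector; the fitness function is left abstract (in such a system it would be evaluated by the hardware compression core), and lam, sigma and the generation count are illustrative:

```python
# Hedged sketch: (1+lambda) Evolution Strategy over a coefficient vector.
# The fitness callable is a stand-in for the hardware-evaluated quality.
import numpy as np

def one_plus_lambda_es(fitness, dim, lam=8, sigma=0.1, generations=200, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    parent = rng.standard_normal(dim)
    best = fitness(parent)
    for _ in range(generations):
        children = parent + sigma * rng.standard_normal((lam, dim))
        scores = np.array([fitness(c) for c in children])
        if scores.max() > best:            # keep the parent unless beaten
            best, parent = scores.max(), children[scores.argmax()]
    return parent, best
```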
Abstract:
In the field of detection and monitoring of dynamic objects in quasi-static scenes, background subtraction techniques in which the background is modeled at pixel level are extensively used, despite showing very significant limitations. In this work we propose a novel approach to background modeling that operates at region level in a wavelet-based multi-resolution framework. Based on a segmentation of the background, each region is characterized independently as a mixture of K Gaussian modes, considering the model of the approximation and detail coefficients at the different wavelet decomposition levels. Background region characterization is updated over time, and the detection of elements of interest is carried out by computing the distance between the background region models and those of each incoming image in the sequence. The inclusion of context in the modeling scheme through each region's characterization makes the model robust, able to cope not only with gradual illumination and long-term changes, but also with sudden illumination changes and the presence of strong shadows in the scene.
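A hedged Python sketch of the region-level idea: summarize each background region by statistics of its wavelet approximation and detail coefficients and fit a K-mode Gaussian mixture per region; the feature definition, region masks and K are assumptions, and a faithful implementation would keep separate per-level coefficient models as the text describes:

```python
# Hedged sketch: per-region Gaussian mixture over wavelet-domain statistics.
# Region masks, K and the feature definition are illustrative assumptions.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def region_features(frame, mask, wavelet="haar", level=2):
    coeffs = pywt.wavedec2(frame * mask, wavelet, level=level)
    feats = [coeffs[0].mean(), coeffs[0].std()]
    for cH, cV, cD in coeffs[1:]:
        feats += [np.abs(cH).mean(), np.abs(cV).mean(), np.abs(cD).mean()]
    return np.asarray(feats)

def fit_region_model(background_frames, mask, K=3):
    # Needs at least K background frames to fit K modes.
    X = np.stack([region_features(f, mask) for f in background_frames])
    return GaussianMixture(n_components=K).fit(X)

# Detection: score_samples() on an incoming frame's region features; a low
# log-likelihood marks the region as containing an element of interest.
```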
Abstract:
Voice biometry is classically based mainly on the parameterization and patterning of speech features. The present approach is instead based on the characterization of phonation (glottal) features. The intention is to reduce intra-speaker variability due to the 'text'. Through the study of larynx biomechanics, it may be seen that the glottal correlates constitute a family of second-order Gaussian wavelets. The methodology relies on the extraction of glottal correlates (the glottal source), which are parameterized using wavelet techniques. Classification and pattern matching were carried out using Gaussian Mixture Models. Data from speakers in a balanced database and in NIST SRE HASR2 were used in verification experiments. Preliminary results are given and discussed.
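Since second-order Gaussian wavelets are Mexican-hat-shaped pulses, a hedged sketch of parameterizing a glottal-source estimate by its correlation with such wavelets at several widths might look as follows; the widths and the feature definition are illustrative assumptions, with the resulting vectors fed to a GMM classifier as in the abstract:

```python
# Hedged sketch: correlate a glottal-source estimate with second-order
# Gaussian (Mexican hat) wavelets of several widths; widths are assumptions.
import numpy as np

def mexican_hat(t, s):
    # Second derivative of a Gaussian (up to sign and normalization).
    u = t / s
    return (1 - u**2) * np.exp(-(u**2) / 2)

def glottal_wavelet_features(g, fs, widths_ms=(1.0, 2.0, 4.0, 8.0)):
    t = (np.arange(len(g)) - len(g) // 2) / fs
    feats = []
    for w in widths_ms:
        psi = mexican_hat(t, w * 1e-3)
        psi /= np.linalg.norm(psi)
        feats.append(np.max(np.abs(np.convolve(g, psi[::-1], mode="same"))))
    return np.asarray(feats)  # feature vector for a GMM classifier
```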
Abstract:
Adaptive embedded systems are required in various applications. This work addresses these needs in the area of adaptive image compression in FPGA devices. A simplified version of an evolution strategy is used to optimize the wavelet filters of a Discrete Wavelet Transform algorithm. We propose an adaptive image compression system in FPGA in which an optimized memory architecture, parallel processing and optimized task scheduling reduce the time of evolution. The proposed solution has been extensively evaluated in terms of both compression quality and processing time. The proposed architecture reduces the time of evolution by 44% compared to our previous reports while keeping the compression quality unchanged with respect to existing implementations. The system is able to find an optimized set of wavelet filters in less than 2 min whenever the type of input data changes.
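A hedged sketch of a fitness measure such a system could use when comparing evolved wavelet filters: PSNR between an original tile and its compressed-then-decompressed version (the 8-bit peak value is an assumption):

```python
# Hedged sketch: PSNR as a compression-quality fitness; peak is assumed 8-bit.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)
```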
Abstract:
Multi-view microscopy techniques such as Light-Sheet Fluorescence Microscopy (LSFM) are powerful tools for 3D + time studies of live embryos in developmental biology. The sample is imaged from several points of view, acquiring a set of 3D views that are then combined, or fused, to overcome their individual limitations. View fusion is still an open problem despite recent contributions in the field. We developed a wavelet-based multi-view fusion method that, owing to the properties of the wavelet decomposition, is able to combine the complementary directional information from all available views into a single volume. Our method is demonstrated on LSFM acquisitions from live sea urchin and zebrafish embryos. The fusion results show improved overall contrast and detail compared with any of the acquired volumes. The proposed method does not require knowledge of the system's point spread function (PSF) and performs better than other existing PSF-independent fusion methods.
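A hedged sketch of wavelet-domain multi-view fusion: decompose each view, keep at every position the detail coefficient of largest magnitude (the view that locally resolved that orientation best), average the approximations, and invert; the wavelet and level count are assumptions, not the paper's exact rule:

```python
# Hedged sketch: n-dimensional wavelet fusion of co-registered views by
# per-subband maximum-magnitude coefficient selection.
import numpy as np
import pywt

def fuse_views(views, wavelet="db2", level=3):
    # views: co-registered volumes of identical shape.
    decs = [pywt.wavedecn(v, wavelet, level=level) for v in views]
    fused = [np.mean([d[0] for d in decs], axis=0)]   # average approximations
    for lvl in range(1, level + 1):
        band = {}
        for k in decs[0][lvl]:                        # per directional subband
            stack = np.stack([d[lvl][k] for d in decs])
            idx = np.abs(stack).argmax(axis=0)
            band[k] = np.take_along_axis(stack, idx[None], axis=0)[0]
        fused.append(band)
    return pywt.waverecn(fused, wavelet)
```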
Abstract:
One of the main drawbacks of wind energy is that generation is intermittent, depending heavily on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective: system operators and energy traders benefit from forecasting techniques, because reducing the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration poses new challenges as higher penetration levels are attained, and wind power ramp forecasting is an example of such a recent topic of interest. The term ramp refers to a large and rapid variation (1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be caused by a broad range of meteorological processes occurring at different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as yaw misalignment, wind turbine shut-down and the aerodynamic interaction between the wind turbines of a wind farm (wake effect).

This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level, within a point-forecasting framework. Time-series-based models were implemented for very short-term prediction, characterized by prediction horizons of up to six hours ahead. As a first step, a methodology to characterize ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index of ramp intensity at each time step; the underlying idea is that ramps are characterized by high power output gradients evaluated over different time scales. A number of state-of-the-art time-series models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs), giving insight into how model complexity contributes to the accuracy of wind power time series modelling. The models were trained according to a mean squared error criterion, and the final set-up of each model was determined through cross-validation. To investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed for identifying features in raw atmospheric data that are relevant for explaining wind power ramp events. It is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM), yielding explanatory variables meaningful for ramp forecasting that were used as exogenous variables by the forecasting models.

The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information had a noticeable impact on forecasting performance, especially during ramp-down events. The results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
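A hedged sketch of a ramp-function-style index in Python: sum the magnitudes of stationary-wavelet detail coefficients of the power series over a chosen range of time scales, so that large gradients at those scales yield a large index; the Haar wavelet and the scale set are illustrative, and the thesis' exact definition may differ:

```python
# Hedged sketch: a wavelet-based ramp-intensity index over selected scales.
# Uses the stationary (undecimated) DWT so the index stays time-aligned.
import numpy as np
import pywt

def ramp_index(power, wavelet="haar", scales=(1, 2, 3)):
    level = max(scales)
    # pywt.swt requires len(power) to be a multiple of 2**level.
    coeffs = pywt.swt(np.asarray(power, dtype=float), wavelet, level=level)
    # swt returns [(cA_level, cD_level), ..., (cA_1, cD_1)].
    index = np.zeros(len(power))
    for j, (_, cD) in enumerate(coeffs):
        if level - j in scales:
            index += np.abs(cD)
    return index
```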