565 results for WAVELET
Abstract:
JPEG2000 is an image compression standard that applies the wavelet transform followed by uniform dead-zone quantisation of the coefficients. Wavelet coefficients exhibit certain dependencies, both statistical and visual. The statistical dependencies are taken into account in the JPEG2000 scheme; the same does not hold, however, for the visual dependencies. In this work we aim to find a representation better adapted to the visual system than the one JPEG2000 provides directly. To find it we use divisive normalisation of the coefficients, a technique that has already proven effective both in statistical decorrelation of coefficients and perceptually. Ideally, we would like to map the coefficients to a space of values in which a larger coefficient value implies a larger visual contribution, and use that space of values for coding. In practice, however, we want our coding system to be integrated into a standard. For this reason we use JPEG2000, an ITU standard that allows a choice of distortion measures in coding, and we use the distortion in the normalised-coefficient domain as the distortion measure for choosing which data are sent first.
Abstract:
There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth.
Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
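The band-limited spectral whitening discussed above can be illustrated with a minimal sketch (this is a generic illustration, not the authors' implementation; all function names and the band limits are assumptions): within a chosen frequency band, each DFT coefficient of a trace is rescaled to unit amplitude while its phase is preserved, and coefficients outside the band are muted.

```python
import cmath

def dft(x):
    # naive discrete Fourier transform (adequate for short traces)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def whiten(trace, k_lo, k_hi, eps=1e-12):
    # band-limited spectral whitening: flatten amplitudes inside the
    # pass band, keep phases, mute everything outside the band
    X = dft(trace)
    N = len(X)
    out = []
    for k, Xk in enumerate(X):
        kk = min(k, N - k)  # fold negative frequencies onto positive ones
        if k_lo <= kk <= k_hi:
            out.append(Xk / (abs(Xk) + eps))  # unit amplitude, same phase
        else:
            out.append(0j)
    return idft(out)
```

Band-limited spectral blueing would differ only in the gain: instead of a flat unit amplitude, the in-band gain rises with frequency, boosting higher frequencies relative to lower ones.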
Abstract:
This paper proposes a new methodology to compute Value at Risk (VaR) for quantifying losses in credit portfolios. We approximate the cumulative distribution of the loss function by a finite combination of Haar wavelet basis functions and calculate the coefficients of the approximation by inverting its Laplace transform. The Wavelet Approximation (WA) method is especially suitable for non-smooth distributions, which often arise in small or concentrated portfolios when the hypotheses of the Basel II formulas are violated. To test the methodology we consider the Vasicek one-factor portfolio credit loss model as our model framework. WA is an accurate, robust and fast method, allowing VaR to be estimated much more quickly than with a Monte Carlo (MC) method at the same level of accuracy and reliability.
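The Haar-basis idea can be illustrated with a toy sketch (hypothetical names; the actual WA method obtains the coefficients by Laplace-transform inversion, which is not reproduced here): a distribution function on [0, 1] is approximated by a finite combination of Haar scaling functions, i.e. by a piecewise-constant function on 2^J dyadic intervals.

```python
def haar_scaling_approx(F, J):
    # approximate F on [0, 1] with 2**J Haar scaling functions;
    # the coefficient on each dyadic interval is taken as the
    # midpoint value of F on that interval
    n = 2 ** J
    coeffs = [F((k + 0.5) / n) for k in range(n)]

    def approx(x):
        k = min(int(x * n), n - 1)  # locate the dyadic interval of x
        return coeffs[k]

    return approx
```

Because each basis function has local support, jumps and kinks in a non-smooth loss distribution are captured locally, without the global ringing that a truncated trigonometric expansion would introduce; resolution doubles with each increment of J.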
Abstract:
In the PhD thesis "Sound Texture Modeling" we deal with the statistical modelling of textural sounds like water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modelling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows low-level correlations in perceptual audio features to be captured, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds.
Our research is embedded within the Metaverse 1 European project (2008-2011), where our models are contributing as low-level building blocks within a semi-automated soundscape generation system.
Real-Time implementation of a blind authentication method using self-synchronous speech watermarking
Abstract:
A blind speech watermarking scheme that meets hard real-time deadlines is presented and implemented. One of the key issues in these block-oriented watermarking techniques is preserving synchronization, that is, recovering the exact position of each block in the mark extraction process. The presented scheme can be split into two distinct parts: the synchronization method and the information mark method. The former is embedded in the time domain and is fast enough to run within real-time requirements. The latter contains the authentication information and is embedded in the wavelet domain. The synchronization and information mark techniques are both tunable, allowing a configurable method: capacity, transparency and robustness can be configured depending on the needs. This makes the scheme useful for professional applications, such as telephony authentication or even sending information through radio applications.
Abstract:
Respiratory syncytial virus (RSV) infection is the leading cause of hospitalisation for respiratory diseases among children under 5 years old. The aim of this study was to analyse RSV seasonality in the five distinct regions of Brazil using time series analysis (wavelet and Fourier series) of the following indicators: monthly positivity of the immunofluorescence reaction for RSV identified by the virologic surveillance system, and the rate of hospitalisations for bronchiolitis and pneumonia due to RSV in children under 5 years old (ICD-10 codes J12.1, J20.5, J21.0 and J21.9). A total of 12,501 samples with 11.6% positivity for RSV (95% confidence interval 11-12.2), varying between 7.1% and 21.4% across the five Brazilian regions, were analysed. A strong trend for annual cycles with a stable stationary pattern in the five regions was identified through wavelet analysis of the indicators. The timing of RSV activity by Fourier analysis was similar between the two indicators and showed regional differences. This study reinforces the importance of adjusting the immunisation period for the high-risk population with the monoclonal antibody palivizumab, taking into account regional differences in RSV seasonality.
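The Fourier-series side of such a seasonality analysis can be sketched as follows (an illustrative simplification under assumed names, using a single annual harmonic; the wavelet part of the study is not reproduced): fitting the 12-month harmonic to a monthly indicator yields the amplitude and timing (peak month) of the seasonal cycle.

```python
import math

def annual_harmonic(monthly):
    # fit y(t) ~ mean + a*cos(2*pi*t/12) + b*sin(2*pi*t/12)
    # for a monthly time series; returns seasonal amplitude and
    # the (fractional) month index at which the cycle peaks
    n = len(monthly)
    mean = sum(monthly) / n
    w = 2 * math.pi / 12
    a = 2 / n * sum((y - mean) * math.cos(w * t) for t, y in enumerate(monthly))
    b = 2 / n * sum((y - mean) * math.sin(w * t) for t, y in enumerate(monthly))
    amplitude = math.hypot(a, b)
    peak_month = (math.atan2(b, a) / w) % 12
    return amplitude, peak_month
```

Comparing the estimated peak month across regions is one simple way to quantify the regional timing differences described above.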
Abstract:
This work analyzes whether the relationship between risk and returns predicted by the Capital Asset Pricing Model (CAPM) holds in the Brazilian stock market. The analysis is based on discrete wavelet decomposition on different time scales, which allows the relationship to be examined over different time horizons, from the short term (2 to 4 days) up to the long term (64 to 128 days). The results indicate a negative or null relationship between systematic risk and returns in Brazil from 2004 to 2007. Since the average excess return of the market portfolio over a risk-free asset during that period was positive, this relationship would be expected to be positive; that is, higher systematic risk should result in higher excess returns, which did not occur. Therefore, during that period, appropriate compensation for systematic risk was not observed in the Brazilian market. The scales that proved most significant for the risk-return relation were the first three, which correspond to short-term time horizons. When the data are treated year by year, thereby separating positive and negative premiums, some relevance of the risk/return relation predicted by the CAPM is found in some years. However, this pattern did not persist throughout the years. Therefore, there is no evidence strong enough to confirm that asset pricing follows the model.
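The scale-by-scale risk-return analysis can be sketched as follows (a toy illustration under assumed names, using a plain Haar DWT rather than the study's exact wavelet filter): at each decomposition level, the wavelet details of the asset and market return series yield a scale-specific beta as the ratio of their covariance to the variance of the market details; level 1 corresponds to the 2-4 day horizon, level 2 to 4-8 days, and so on.

```python
def haar_step(x):
    # one level of the Haar DWT: approximation and detail coefficients
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def wavelet_betas(asset, market, levels=3):
    # scale-specific CAPM beta = cov(asset, market details)
    #                            / var(market details), per level
    betas = []
    for _ in range(levels):
        asset, da = haar_step(asset)
        market, dm = haar_step(market)
        var = sum(m * m for m in dm)
        cov = sum(a * m for a, m in zip(da, dm))
        betas.append(cov / var)
    return betas
```

For an asset whose returns are an exact multiple of the market's, every scale-wise beta equals that multiple; the interest of the method lies in how the betas of real series differ across scales.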
Abstract:
The aim of this study was to extract multi-parametric measures characterizing different features of sit-to-stand (Si-St) and stand-to-sit (St-Si) transitions in older persons, using a single inertial sensor attached to the chest. The investigated parameters were transition duration, range of trunk tilt, smoothness of the transition pattern assessed by its fractal dimension, and the dynamics of trunk movement described by local wavelet energy. A measurement protocol with a Si-St followed by a St-Si postural transition was performed by two groups of participants: the first group (N=79) included frail elderly subjects admitted to a post-acute rehabilitation facility and the second group (N=27) comprised healthy community-dwelling elderly persons. Subjects were also evaluated with Tinetti's POMA scale. Compared to healthy elderly persons, the frail group at baseline had significantly longer Si-St (3.85±1.04 vs. 2.60±0.32, p=0.001) and St-Si (4.08±1.21 vs. 2.81±0.36, p=0.001) transition durations. Frail older persons also had significantly decreased smoothness of the Si-St transition pattern (1.36±0.07 vs. 1.21±0.05, p=0.001) and dynamics of trunk movement. Measurements after three weeks of rehabilitation in frail older persons showed that smoothness of the transition pattern had the highest improvement effect size (0.4) and discriminative performance. These results demonstrate the potential of such parameters to distinguish older subjects with different functional and health conditions.
Abstract:
Self-potentials (SP) are sensitive to water fluxes and concentration gradients in both saturated and unsaturated geological media, but quantitative interpretations of SP field data may often be hindered by the superposition of different source contributions and time-varying electrode potentials. Self-potential mapping and close to two months of SP monitoring on a gravel bar were performed to investigate the origins of SP signals at a restored river section of the Thur River in northeastern Switzerland. The SP mapping and subsequent inversion of the data indicate that the SP sources are mainly located in the upper few meters in regions of soil cover rather than bare gravel. Wavelet analyses of the time series indicate a strong, but non-linear influence of water table and water content variations, as well as rainfall intensity, on the recorded SP signals. Modeling of the SP response with respect to an increase in the water table elevation and precipitation indicates that the distribution of soil properties in the vadose zone has a very strong influence. We conclude that the observed SP responses on the gravel bar are more complicated than previously proposed semi-empirical relationships between SP signals and hydraulic head or the thickness of the vadose zone. We suggest that future SP monitoring in restored river corridors should either focus on quantifying vadose zone processes by installing vertical profiles of closely spaced SP electrodes, or install the electrodes within the river to avoid signals arising from vadose zone processes and time-varying electrochemical conditions in the vicinity of the electrodes.
Abstract:
Marine upwelling ecosystems are highly heterogeneous and exhibit intense physical-process activity at the mesoscale (tens of kilometres) and at the submesoscale (hundreds of metres to kilometres). The importance of these physical structures lies in the structuring effect they exert on zooplankton biomass. The present work concerns a study carried out in the Northern Humboldt Current System (Peru). Two variables were used: the depth of the upper limit of the oxygen minimum zone (OMZ) and zooplankton biomass. An analysis methodology based on wavelets is developed for identifying the structures of the physical processes at their different scales. The method was applied to the OMZ data. Simulation studies showed that the method is able to identify the structures of interest, with estimation error at the edges of the wavelet power spectrum. The typology of the identified structures showed three types: larger mesoscale structures, and two kinds of small submesoscale structures at different depths. Another important result was that the small, deeper structures hold greater zooplankton biomass, mainly in the downwelling structures.
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope. The method can also be applied to ground-based images.
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate them using simulation studies. Our comparative analysis uses methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so, unlike the methods above, did not improve in performance under model selection. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
Abstract:
BACKGROUND: DNA sequence polymorphism analysis can provide valuable information on the evolutionary forces shaping nucleotide variation, and provides insight into the functional significance of genomic regions. The ongoing genome projects will radically improve our capability to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. RESULTS: We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have incorporated a graphical user interface. The main features of this software are: i) exhaustive population-genetic analyses, including those based on coalescent theory; ii) analysis adapted to the shallow data generated by high-throughput genome projects; iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; iv) identification of relevant genomic regions by sliding-window and wavelet-multiresolution approaches; v) visualization of the results integrated with current genome annotations in commonly available genome browsers. CONCLUSION: VariScan is a powerful and flexible suite of software for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for an exhaustive exploratory analysis of genome-wide DNA polymorphism data.
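The sliding-window idea in point iv) can be illustrated with a toy computation (hypothetical function names; VariScan's own algorithms and the wavelet-multiresolution part are not reproduced here): nucleotide diversity, the mean number of pairwise differences per site, is computed within each window slid along an aligned set of sequences.

```python
def sliding_pi(seqs, window, step):
    # nucleotide diversity (mean pairwise differences per site)
    # in windows of the given size, slid along an alignment
    n = len(seqs)
    length = len(seqs[0])
    n_pairs = n * (n - 1) / 2
    values = []
    for start in range(0, length - window + 1, step):
        diffs = 0
        for i in range(n):
            for j in range(i + 1, n):
                # count mismatching sites between sequences i and j
                diffs += sum(a != b for a, b in
                             zip(seqs[i][start:start + window],
                                 seqs[j][start:start + window]))
        values.append(diffs / (n_pairs * window))
    return values
```

Windows with unusually high or low diversity relative to the genome-wide background are the kind of candidate regions that selection scans then examine more closely.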
Abstract:
Background: Oscillatory activity, which can be separated into background activity and oscillatory burst patterns, is thought to be representative of local synchronies of neural assemblies. Oscillatory burst events should consequently play a specific functional role, distinct from background EEG activity, especially for cognitive tasks (e.g. working memory tasks), binding mechanisms and perceptual dynamics (e.g. visual binding), or in clinical contexts (e.g. effects of brain disorders). However, extracting oscillatory events in single trials, with a reliable and consistent method, is not a simple task. Results: In this work we propose a user-friendly stand-alone toolbox, which fits in a reasonable time a bump time-frequency model to the wavelet representations of a set of signals. The software is provided with a Matlab toolbox which can compute wavelet representations before calling the stand-alone application automatically. Conclusion: The tool is publicly available as freeware at http://www.bsp.brain.riken.jp/bumptoolbox/toolbox_home.html