899 results for Wavelet Transform
Abstract:
We study the scaling properties and Kraichnan–Leith–Batchelor (KLB) theory of forced inverse cascades in generalized two-dimensional (2D) fluids (α-turbulence models) simulated at resolution 8192 × 8192. We consider α=1 (surface quasigeostrophic flow), α=2 (2D Euler flow) and α=3. The forcing scale is well resolved, a direct cascade is present and there is no large-scale dissipation. Coherent vortices spanning a range of sizes, most larger than the forcing scale, are present for both α=1 and α=2. The active scalar field for α=3 contains comparatively few and small vortices. The energy spectral slopes in the inverse cascade are steeper than the KLB prediction −(7−α)/3 in all three systems. Since we stop the simulations well before the cascades have reached the domain scale, vortex formation and spectral steepening are not due to condensation effects; nor are they caused by large-scale dissipation, which is absent. One- and two-point p.d.f.s, hyperflatness factors and structure functions indicate that the inverse cascades are intermittent and non-Gaussian over much of the inertial range for α=1 and α=2, while the α=3 inverse cascade is much closer to Gaussian and non-intermittent. For α=3 the steep spectrum is close to that associated with enstrophy equipartition. Continuous wavelet analysis shows approximate KLB scaling ℰ(k) ∝ k^(−2) (α=1) and ℰ(k) ∝ k^(−5/3) (α=2) in the interstitial regions between the coherent vortices. Our results demonstrate that coherent vortex formation (α=1 and α=2) and non-realizability (α=3) cause 2D inverse cascades to deviate from the KLB predictions, but that the flow between the vortices exhibits KLB scaling and non-intermittent statistics for α=1 and α=2.
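The spectral-slope measurement at the heart of this abstract can be illustrated in a few lines. The sketch below is not the paper's code: it builds a synthetic random-phase 2D field with a known k^(−2) ring-summed spectrum, computes the angle-averaged energy spectrum, and fits the log-log slope over an assumed inertial band (the 256² grid and the 4–60 wavenumber band are invented for the demo).

```python
import numpy as np

def isotropic_spectrum(field):
    """Angle-averaged energy spectrum E(k) of a doubly periodic 2D field."""
    n = field.shape[0]
    fhat = np.fft.fft2(field) / n**2
    energy2d = 0.5 * np.abs(fhat) ** 2
    k1d = np.fft.fftfreq(n, d=1.0 / n)
    kmag = np.hypot(*np.meshgrid(k1d, k1d, indexing="ij"))
    kbins = np.arange(1, n // 2)
    ek = np.array([energy2d[(kmag >= k) & (kmag < k + 1)].sum() for k in kbins])
    return kbins, ek

def spectral_slope(k, ek, kmin, kmax):
    """Least-squares slope of log E(k) over the band [kmin, kmax]."""
    sel = (k >= kmin) & (k <= kmax) & (ek > 0)
    return np.polyfit(np.log(k[sel]), np.log(ek[sel]), 1)[0]

# Synthetic random-phase field: |fhat|^2 ~ k^-3 pointwise, so the ring sum
# over ~2*pi*k modes gives E(k) ~ k^-2.
rng = np.random.default_rng(0)
n = 256
k1d = np.fft.fftfreq(n, d=1.0 / n)
kmag = np.hypot(*np.meshgrid(k1d, k1d, indexing="ij"))
kmag[0, 0] = 1.0  # avoid division by zero at the mean mode
fhat = kmag ** -1.5 * np.exp(2j * np.pi * rng.random((n, n)))
field = np.real(np.fft.ifft2(fhat))
k, ek = isotropic_spectrum(field)
print(round(spectral_slope(k, ek, 4, 60), 1))
```

The recovered slope should sit close to the imposed −2, which is how a deviation from the KLB prediction would be quantified on real simulation output.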
Abstract:
Contamination of the electroencephalogram (EEG) by artifacts greatly reduces the quality of the recorded signals. There is a need for automated artifact removal methods. However, such methods are rarely evaluated against one another via rigorous criteria, with results often presented based upon visual inspection alone. This work presents a comparative study of automatic methods for removing blink, electrocardiographic, and electromyographic artifacts from the EEG. Three methods are considered: wavelet-, blind source separation (BSS)-, and multivariate singular spectrum analysis (MSSA)-based correction. These are applied to data sets containing mixtures of artifacts. Metrics are devised to measure the performance of each method. The BSS method is seen to be the best approach for artifacts of high signal-to-noise ratio (SNR). By contrast, MSSA performs well at low SNRs, but at the expense of a large number of false positive corrections.
Abstract:
A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing. The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g. electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods lagged auto-mutual information clustering (LAMIC) and fully automated statistical thresholding (FASTER), and is able to remove a wide range of artifact types, including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts.
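The independent component analysis step that FORCe combines with wavelets can be illustrated with a minimal FastICA (tanh nonlinearity, deflation) on two synthetic sources. This is a generic textbook sketch, not FORCe's algorithm; the mixing matrix and sources are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 8, 4000)
s1 = np.sin(2 * np.pi * t)                    # rhythmic "brain" source
s2 = np.sign(np.sin(2 * np.pi * 0.3 * t))     # slow square wave, blink-like
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # invented mixing matrix
X = A @ S                                     # two "channels"

# Whiten the observations.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# FastICA fixed-point iteration with deflation.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = Z @ g / Z.shape[1] - (1 - g**2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)    # decorrelate from found components
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if done:
            break
    W[i] = w

recovered = W @ Z
corr = np.abs(np.corrcoef(np.vstack([S, recovered]))[:2, 2:])
print(np.round(corr.max(axis=1), 2))  # per-source best match
```

Each true source is recovered (up to sign and order) with near-unit correlation; in an artifact-removal pipeline the blink-like component would then be zeroed before remixing.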
Abstract:
We utilized an ecosystem process model (SIPNET, simplified photosynthesis and evapotranspiration model) to estimate carbon fluxes of gross primary productivity and total ecosystem respiration of a high-elevation coniferous forest. The data assimilation routine incorporated aggregated twice-daily measurements of the net ecosystem exchange of CO2 (NEE) and satellite-based reflectance measurements of the fraction of absorbed photosynthetically active radiation (fAPAR) on an eight-day timescale. From these data we conducted a data assimilation experiment with fifteen different combinations of available data, using twice-daily NEE, aggregated annual NEE, eight-day fAPAR, and average annual fAPAR. Model parameters were conditioned on three years of NEE and fAPAR data and results were evaluated to determine the information content of the different combinations of data streams. Across the data assimilation experiments conducted, model selection metrics such as the Bayesian Information Criterion and the Deviance Information Criterion reached minimum values when assimilating average annual fAPAR and twice-daily NEE data. Wavelet coherence analyses showed higher correlations between measured and modeled fAPAR on longer timescales, ranging from 9 to 12 months. There were strong correlations between measured and modeled NEE (coefficient of determination R² = 0.86), but correlations between measured and modeled eight-day fAPAR were quite poor (R² = −0.94). We conclude that this inability to reproduce fAPAR on an eight-day timescale would improve with consideration of the radiative transfer through the plant canopy. Modeled fluxes when assimilating average annual fAPAR and annual NEE were comparable to corresponding results when assimilating twice-daily NEE, albeit with greater uncertainty. Our results support the conclusion that, for this coniferous forest, twice-daily NEE data are a critical measurement stream for the data assimilation.
The results from this modeling exercise indicate that, for this coniferous forest, annual averages of satellite-based fAPAR measurements paired with annual NEE estimates may provide spatial detail to components of ecosystem carbon fluxes in proximity to eddy covariance towers. Inclusion of other independent data streams in the assimilation would also reduce uncertainty in modeled values.
Abstract:
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM easier to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to speed gains (in CPU time) and to smaller memory requirements for solving the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h
Abstract:
This continuing study of intragroup light in compact groups of galaxies aims to establish new constraints on models of the formation and evolution of galaxy groups, especially of compact groups, which are a key part of the evolution of larger structures, such as clusters. In this paper we present three additional groups (HCG 15, 35 and 51) using deep wide-field B- and R-band images observed with the LAICA camera at the 3.5-m telescope at the Calar Alto observatory (CAHA). This instrument provides us with very stable flat-fielding, a mandatory condition for reliably measuring intragroup diffuse light. The images were analysed with the OV_WAV package, a wavelet technique that allows us to uncover the intragroup component in an unprecedented way. We have detected that 19, 15 and 26 per cent of the total light of HCG 15, 35 and 51, respectively, is in the diffuse component, with colours that are compatible with old stellar populations and with mean surface brightness that can be as low as 28.4 B mag arcsec(-2). Dynamical masses, crossing times and mass-to-light ratios were recalculated using the new group parameters. Tidal features were also analysed using the wavelet technique.
Abstract:
In this work we introduce a new hierarchical surface decomposition method for multiscale analysis of surface meshes. In contrast to other multiresolution methods, our approach relies on spectral properties of the surface to build a binary hierarchical decomposition. Namely, we utilize the first nontrivial eigenfunction of the Laplace-Beltrami operator to recursively decompose the surface. For this reason we coin our surface decomposition the Fiedler tree. Using the Fiedler tree ensures a number of attractive properties, including: mesh-independent decomposition, well-formed and nearly equi-areal surface patches, and noise robustness. We show how the evenly distributed patches can be exploited for generating multiresolution high quality uniform meshes. Additionally, our decomposition permits a natural means for carrying out wavelet methods, resulting in an intuitive method for producing feature-sensitive meshes at multiple scales. Published by Elsevier Ltd.
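One step of the Fiedler-tree recursion described above can be sketched directly: form the graph Laplacian of the mesh's vertex adjacency, take the eigenvector of the second-smallest eigenvalue (the Fiedler vector), and bipartition vertices by its sign. The 4×4 grid graph below is a stand-in for a surface mesh; the paper's method uses the Laplace-Beltrami operator on the surface itself.

```python
import numpy as np

def fiedler_split(adj):
    """Bipartition a graph by the sign of its Fiedler vector."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(lap)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # first nontrivial eigenvector
    return fiedler >= 0

# Build a 4x4 grid graph as a toy "mesh".
n = 4
idx = lambda i, j: i * n + j
adj = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        if i + 1 < n:
            adj[idx(i, j), idx(i + 1, j)] = adj[idx(i + 1, j), idx(i, j)] = 1
        if j + 1 < n:
            adj[idx(i, j), idx(i, j + 1)] = adj[idx(i, j + 1), idx(i, j)] = 1

part = fiedler_split(adj)
print(int(part.sum()), "of", part.size)
```

Recursing on each half yields the binary hierarchy; the near-balanced, spatially coherent splits are what give the well-formed, nearly equi-areal patches the abstract describes.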
Abstract:
Texture is one of the most important visual attributes for image analysis. It has been widely used in image analysis and pattern recognition. A partially self-avoiding deterministic walk has recently been proposed as an approach for texture analysis with promising results. This approach uses walkers (called tourists) to exploit the gray-scale image contexts at several levels. Here, we present an approach to generate graphs out of the trajectories produced by the tourist walks. The generated graphs embody important characteristics related to tourist transitivity in the image. Computed from these graphs, the statistical position (degree mean) and dispersion (entropy of two vertices with the same degree) measures are used as texture descriptors. A comparison with traditional texture analysis methods is performed to illustrate the high performance of this novel approach. (C) 2011 Elsevier Ltd. All rights reserved.
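A stripped-down version of the tourist-walk-to-graph idea can be sketched as follows. This simplification assumes memory μ=1 and the rule "move to the most similar 4-neighbour", records each transition as a graph edge, and summarizes the graph by its mean degree; the paper's descriptors use richer memories, hierarchy levels and a degree-entropy measure.

```python
import numpy as np
from collections import Counter

def tourist_edges(img, steps=20):
    """Run a short deterministic tourist walk from every pixel; collect edges."""
    h, w = img.shape
    edges = set()
    for start in np.ndindex(h, w):
        pos, prev = start, None
        for _ in range(steps):
            i, j = pos
            nbrs = [(i + di, j + dj)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w
                    and (i + di, j + dj) != prev]          # memory mu = 1
            nxt = min(nbrs, key=lambda q: abs(int(img[q]) - int(img[pos])))
            edges.add(frozenset((pos, nxt)))
            pos, prev = nxt, pos
    return edges

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
edges = tourist_edges(img)

# Degree statistics of the transition graph as a toy texture descriptor.
deg = Counter()
for e in edges:
    for v in e:
        deg[v] += 1
mean_degree = sum(deg.values()) / img.size
print(round(mean_degree, 2))
```

Different textures induce different trajectory graphs, so degree-based statistics of this graph discriminate between them.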
Abstract:
This article presents a novel method of plant classification using Gabor wavelet filters to extract texture features from the foliar surface. The aim of this promising method is to complement the results obtained from other leaf attributes (such as shape, contour and color, among others), thereby increasing the classification rate of plant species. To corroborate the efficiency of the technique, an experiment using 20 species from the Brazilian flora was performed and discussed. The results are also compared with Fourier texture descriptors and co-occurrence matrices. (C) 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 236-243, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20201
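A Gabor-wavelet texture feature extractor of the kind described above can be sketched as follows. Filter frequencies, orientations and sizes here are illustrative assumptions, not the paper's parameters: the image is convolved with a small Gabor bank and the mean and standard deviation of each response magnitude form the descriptor.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Complex Gabor kernel: Gaussian envelope times a rotated carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_features(img, freqs=(0.1, 0.2), n_orient=4):
    """Mean and std of each filter-response magnitude (FFT-based convolution)."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img) *
                                       np.fft.fft2(kern, img.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

# Two synthetic textures: stripes along each axis should give distinct features.
t = np.arange(64)
horiz = np.tile(np.sin(2 * np.pi * 0.2 * t), (64, 1)).T
vert = horiz.T
fh, fv = gabor_features(horiz), gabor_features(vert)
print(np.linalg.norm(fh - fv) > 1e-3)
```

The oriented filters respond differently to the two stripe directions, so the two feature vectors separate the textures; a classifier over such vectors is the basis of the comparison in the article.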
Abstract:
In this paper, a novel statistical test is introduced to compare two locally stationary time series. The proposed approach is a Wald test considering time-varying autoregressive modeling and function projections in adequate spaces. The covariance structure of the innovations may also be time-varying. In order to obtain function estimators for the time-varying autoregressive parameters, we consider function expansions in spline and wavelet bases. Simulation studies provide evidence that the proposed test has good performance. We also assess its usefulness when applied to a financial time series.
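The estimation step behind such a test, fitting a time-varying AR coefficient by basis expansion, can be sketched with a low-order polynomial basis standing in for the paper's spline/wavelet bases (the model, sample size and coefficient path below are invented for illustration): x_t = a(t/n)·x_{t−1} + e_t, with a(u) ≈ Σ_j b_j u^j fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
u = np.arange(n) / n
a_true = 0.3 + 0.4 * u                 # slowly varying AR(1) coefficient
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true[t] * x[t - 1] + rng.standard_normal()

# Design matrix: each basis function times the lagged series.
degree = 2
basis = np.vander(u[1:], degree + 1, increasing=True)   # columns 1, u, u^2
design = basis * x[:-1, None]
beta, *_ = np.linalg.lstsq(design, x[1:], rcond=None)
a_hat = basis @ beta                   # estimated coefficient function

print(round(float(np.mean(np.abs(a_hat - a_true[1:]))), 2))
```

A Wald test as in the paper would then compare the fitted basis coefficients of two series against the hypothesis that they share the same coefficient function.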
Abstract:
In this work an efficient third-order non-linear finite difference scheme for adaptively solving hyperbolic systems of one-dimensional conservation laws is developed. The method is based on applying to the solution of the differential equation an interpolating wavelet transform at each time step, generating a multilevel representation for the solution, which is thresholded so that a sparse point representation is generated. The numerical fluxes obtained by a Lax-Friedrichs flux splitting are evaluated on the sparse grid by an essentially non-oscillatory (ENO) approximation, which chooses the locally smoothest stencil among all the possibilities for each point of the sparse grid. The time evolution of the differential operator is done on this sparse representation by a total variation diminishing (TVD) Runge-Kutta method. Four classical examples of initial value problems for the Euler equations of gas dynamics are accurately solved and their sparse solutions are analyzed with respect to the threshold parameters, confirming the efficiency of the wavelet transform as an adaptive grid generation technique. (C) 2008 IMACS. Published by Elsevier B.V. All rights reserved.
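The thresholding that produces the sparse point representation can be illustrated on one level of an interpolating (lifting) transform. This is a minimal single-level sketch with linear prediction; the paper uses higher-order interpolation over multiple levels. Odd samples are predicted from their even neighbours, and only points whose prediction error exceeds a threshold are kept.

```python
import numpy as np

def sparse_points(x, eps):
    """One lifting level: flag odd samples whose interpolation error > eps."""
    even, odd = x[0::2], x[1::2]
    pred = 0.5 * (even[:-1] + even[1:])   # linear prediction of odd samples
    detail = odd[:len(pred)] - pred       # interpolating-wavelet detail
    keep = np.abs(detail) > eps           # significant (non-smooth) points
    return detail, keep

# A smooth profile with one sharp front, as in a hyperbolic solution.
xs = np.linspace(0, 1, 257)
u = np.tanh((xs - 0.5) / 0.01)
detail, keep = sparse_points(u, eps=1e-3)
print(int(keep.sum()), "of", keep.size)
```

Only the handful of points near the front is flagged, so the expensive ENO flux evaluation can be restricted to a small fraction of the grid, which is the source of the scheme's adaptivity.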
Abstract:
Condition monitoring of wooden railway sleepers is generally carried out by visual inspection and, if necessary, some impact acoustic examination carried out intuitively by skilled personnel. In this work, a pattern recognition solution has been proposed to automate the process for the achievement of robust results. The study presents a comparison of several pattern recognition techniques together with various nonstationary feature extraction techniques for the classification of impact acoustic emissions. Pattern classifiers such as the multilayer perceptron, learning vector quantization and Gaussian mixture models are combined with nonstationary feature extraction techniques such as the Short-Time Fourier Transform, Continuous Wavelet Transform, Discrete Wavelet Transform and Wigner-Ville Distribution. Due to the presence of several different feature extraction and classification techniques, data fusion has been investigated. Data fusion in the current case has mainly been investigated on two levels: feature level and classifier level. Fusion at the feature level demonstrated the best results, with an overall accuracy of 82% when compared to the human operator.
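One of the feature extractors named above, the Short-Time Fourier Transform, can be sketched in a few lines. Window and hop sizes and the synthetic impact-like burst are illustrative assumptions; the magnitude frames are the kind of nonstationary features that would feed the classifiers being compared.

```python
import numpy as np

def stft(x, win=128, hop=64):
    """Magnitude STFT with a Hann window; returns (n_frames, win//2 + 1)."""
    w = np.hanning(win)
    n = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop:i * hop + win] * w for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1))

# An impact-like burst: exponentially decaying 500 Hz tone at 8 kHz sampling.
fs = 8000
t = np.arange(fs // 2) / fs
x = np.exp(-10 * t) * np.sin(2 * np.pi * 500 * t)

S = stft(x)
peak_bin = int(S[0].argmax())       # bin width is fs/win = 62.5 Hz
print(peak_bin)
```

The first frame peaks at the bin corresponding to the 500 Hz carrier, and the frame-to-frame decay of the magnitudes captures the transient character of the impact, which is what distinguishes sleeper conditions acoustically.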
Abstract:
Objective: To investigate whether spirography-based objective measures are able to effectively characterize the severity of unwanted symptom states (Off and dyskinesia) and discriminate them from the motor state of healthy elderly subjects. Background: Sixty-five patients with advanced Parkinson's disease (PD) and 10 healthy elderly (HE) subjects performed repeated assessments of spirography, using a touch-screen telemetry device in their home environments. On inclusion, the patients were either treated with levodopa-carbidopa intestinal gel or were candidates for switching to this treatment. On each test occasion, the subjects were asked to trace a pre-drawn Archimedes spiral shown on the screen, using an ergonomic pen stylus. The test was repeated three times and was performed using the dominant hand. A clinician used a web interface which animated the spiral drawings, allowing him to observe different kinematic features, like accelerations and spatial changes, during the drawing process and to rate different motor impairments. Initially, the motor impairments of drawing speed, irregularity and hesitation were rated on a 0 (normal) to 4 (extremely severe) scale, followed by marking the momentary motor state of the patient into two categories, Off and dyskinesia. A sample of spirals drawn by HE subjects was randomly selected and used in the subsequent analysis. Methods: The raw spiral data, consisting of stylus position and timestamp, were processed using time series analysis techniques such as the discrete wavelet transform, approximate entropy and dynamic time warping in order to extract 13 quantitative measures representing meaningful motor impairment information. A principal component analysis (PCA) was used to reduce the dimensions of the quantitative measures to 4 principal components (PCs).
In order to classify the motor states into three categories (Off, HE and dyskinesia), a logistic regression model was used as a classifier to map the 4 PCs to the corresponding clinically assigned motor state categories. A stratified 10-fold cross-validation (also known as rotation estimation) was applied to assess the generalization ability of the logistic regression classifier to future independent data sets. To investigate mean differences of the 4 PCs across the three categories, a one-way ANOVA test followed by Tukey multiple comparisons was used. Results: The agreement between computed and clinician ratings was very good, with a weighted area under the receiver operating characteristic curve (AUC) coefficient of 0.91. The mean PC scores differed across the three motor state categories, albeit at different levels. The first 2 PCs were good at discriminating between the motor states, whereas PC3 was good at discriminating between HE subjects and PD patients. The mean scores of PC4 showed a trend across the three states but without significant differences. The Spearman's rank correlations between the first 2 PCs and clinically assessed motor impairments were as follows: drawing speed (PC1, 0.34; PC2, 0.83), irregularity (PC1, 0.17; PC2, 0.17), and hesitation (PC1, 0.27; PC2, 0.77). Conclusions: These findings suggest that spirography-based objective measures are valid measures of spatial- and time-dependent deficits and can be used to distinguish drug-related motor dysfunctions between Off and dyskinesia in PD. These measures can be potentially useful during clinical evaluation of individualized drug-related complications such as over- and under-medication, thus maximizing the amount of time patients spend in the On state.
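The PCA-plus-logistic-regression pipeline described above can be sketched on synthetic data (the clusters below are invented, not the study's spiral measures): reduce 13 features to 4 principal components via SVD, then fit a softmax (multinomial logistic) classifier for the three motor-state categories by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per, n_feat, n_pc, n_cls = 100, 13, 4, 3

# Three synthetic clusters in a 13-dimensional feature space.
centers = rng.normal(0, 2, (n_cls, n_feat))
X = np.vstack([c + rng.normal(0, 1, (n_per, n_feat)) for c in centers])
y = np.repeat(np.arange(n_cls), n_per)

# PCA via SVD of the centred data; keep the first 4 components.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ vt[:n_pc].T

# Softmax regression on the PCs (bias column appended).
Z = np.hstack([pcs, np.ones((len(pcs), 1))])
W = np.zeros((n_pc + 1, n_cls))
onehot = np.eye(n_cls)[y]
for _ in range(500):
    logits = Z @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * Z.T @ (p - onehot) / len(Z)   # gradient of cross-entropy

acc = float(np.mean((Z @ W).argmax(axis=1) == y))
print(acc > 0.9)
```

In the study the same mapping is evaluated with stratified 10-fold cross-validation rather than on training accuracy as here.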
Abstract:
Background: Voice processing in real time is challenging. A drawback of previous work on Hypokinetic Dysarthria (HKD) recognition is the requirement of controlled settings in a laboratory environment. A personal digital assistant (PDA) has been developed for home assessment of PD patients. The PDA offers sound processing capabilities, which allow for developing a module for recognition and quantification of HKD. Objective: To compose an algorithm for assessment of PD speech severity in the home environment based on a review synthesis. Methods: A two-tier review methodology is utilized. The first tier focuses on real-time problems in speech detection. In the second tier, acoustic features that are robust to medication changes in levodopa-responsive patients are investigated for HKD recognition. Keywords such as "Hypokinetic Dysarthria" and "speech recognition in real time" were used in the search engines. IEEE Xplore produced the most useful search hits compared to Google Scholar, ELIN, EBRARY, PubMed and LIBRIS. Results: Vowel and consonant formants are the most relevant acoustic parameters for reflecting PD medication changes. Since the relevant speech segments (consonants and vowels) contain a minority of the speech energy, intelligibility can be improved by amplifying the voice signal using amplitude compression. Pause detection and peak-to-average power ratio calculations for voice segmentation produce rich voice features in real time. Enhancements in voice segmentation can be achieved by introducing the zero-crossing rate (ZCR): consonants have a high ZCR whereas vowels have a low ZCR. The wavelet transform is found promising for voice analysis since it quantizes non-stationary voice signals over a time series using scale and translation parameters. In this way voice intelligibility in the waveforms can be analyzed in each time frame. Conclusions: This review evaluated HKD recognition algorithms to develop a tool for PD speech home assessment using modern mobile technology.
An algorithm that tackles real-time constraints in HKD recognition based on the review synthesis is proposed. We suggest that speech features may be further processed using wavelet transforms and used with a neural network for detection and quantification of speech anomalies related to PD. Based on this model, patients' speech can be automatically categorized according to UPDRS speech ratings.
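The segmentation cues discussed in the review, short-time energy for pause detection and ZCR for separating vowel-like (low ZCR) from consonant-like (high ZCR) frames, can be sketched as follows. Frame size, thresholds and the synthetic vowel/consonant/pause signal are illustrative assumptions.

```python
import numpy as np

def frame_features(x, frame=256):
    """Frame-wise short-time energy and zero-crossing rate."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    energy = np.mean(frames**2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zcr

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(3)
vowel = np.sin(2 * np.pi * 150 * t[:4000])     # periodic, low ZCR
consonant = 0.3 * rng.standard_normal(2000)    # noise-like, high ZCR
pause = np.zeros(2000)
signal = np.concatenate([vowel, consonant, pause])

energy, zcr = frame_features(signal)
voiced = energy > 0.01                          # pause detection by energy
print(bool(zcr[:4].mean() < zcr[16:20].mean()))  # vowel ZCR vs consonant ZCR
```

Running such features in a sliding window is cheap enough for the real-time, on-device constraint that motivates the proposed algorithm.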