943 results for Fourier analysis
Abstract:
The Weyl-Wigner correspondence prescription, which makes great use of Fourier duality, is reexamined from the point of view of Kac algebras, the most general background for noncommutative Fourier analysis allowing for that property. It is shown how the standard Kac structure has to be extended in order to accommodate the physical requirements. Both an Abelian and a symmetric projective Kac algebra are shown to provide, in close parallel to the standard case, a new dual framework and a well-defined notion of projective Fourier duality for the group of translations on the plane. The Weyl formula arises naturally as an irreducible component of the duality mapping between these projective algebras.
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes.
Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of study of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic model; we then apply it to real data, obtaining surface-wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
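To make the contrast between Fourier and wavelet resolution concrete, here is a minimal, dependency-free sketch (not taken from the thesis; all names and the test signal are illustrative) of one level of the Haar wavelet transform, the simplest wavelet basis:

```python
import numpy as np

def haar_dwt_step(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient arrays; each detail
    coefficient is localised in time, unlike a Fourier coefficient.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# A signal with a sharp discontinuity between samples 6 and 7:
signal = np.concatenate([np.zeros(7), np.ones(9)])
approx, detail = haar_dwt_step(signal)

# Only the detail coefficient whose support straddles the jump is
# nonzero, so the transform pinpoints where the discontinuity occurs:
# exactly the time resolution that a Fourier expansion lacks.
print(detail)
```

A Fourier expansion of the same step signal would spread energy across many sine and cosine coefficients, none of which indicates where the jump is.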
Abstract:
Wavelet analysis offers an alternative to Fourier-based time-series analysis, and is particularly useful when the amplitudes and periods of dominant cycles are time-dependent. We analyse climatic records derived from oxygen isotopic ratios of marine sediment cores with modified Morlet wavelets. We use a normalization of the Morlet wavelets which allows direct correspondence with Fourier analysis. This provides a direct view of the oscillations at various frequencies, and illustrates the nature of the time dependence of the dominant cycles.
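The scale-to-frequency correspondence mentioned above can be sketched numerically. The following is a hedged illustration, not the paper's code: a Morlet wavelet power computation with a Torrence-and-Compo-style normalisation, applied to a synthetic 100 kyr cycle (all names, parameters and the test signal are assumptions for illustration):

```python
import numpy as np

def morlet_power(signal, dt, scales, omega0=6.0):
    """Continuous wavelet power via the FFT, Morlet mother wavelet.

    The normalisation factor sqrt(2*pi*s/dt) makes the wavelet power
    of a sinusoid directly comparable across scales, which is what
    allows a direct correspondence with a Fourier spectrum.
    """
    n = len(signal)
    freqs = 2 * np.pi * np.fft.fftfreq(n, dt)   # angular frequencies
    sig_hat = np.fft.fft(signal)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Fourier transform of the (analytic) Morlet wavelet at scale s
        psi_hat = (np.pi ** -0.25) * np.exp(-0.5 * (s * freqs - omega0) ** 2)
        psi_hat *= np.sqrt(2 * np.pi * s / dt) * (freqs > 0)
        w = np.fft.ifft(sig_hat * psi_hat)
        power[i] = np.abs(w) ** 2
    return power

# A synthetic 100 kyr "orbital" cycle sampled every 1 kyr:
dt, period = 1.0, 100.0
t = np.arange(0.0, 800.0, dt)
x = np.sin(2 * np.pi * t / period)

scales = np.arange(10.0, 200.0, 2.0)
power = morlet_power(x, dt, scales)
best = scales[np.argmax(power.mean(axis=1))]

# Morlet scale-to-Fourier-period conversion for omega0 = 6:
fourier_period = best * 4 * np.pi / (6 + np.sqrt(2 + 36))
print(fourier_period)  # close to the input 100 kyr period
```

The last line converts the best-fitting wavelet scale back to an equivalent Fourier period, recovering the period of the input cycle to within the scale grid's resolution.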
Abstract:
Date-32 is a fast and easily used computer program developed to date Quaternary deep-sea cores by associating variations in the Earth's orbit with recurring oscillations in core properties, such as carbonate content or isotope composition. Starting with known top and bottom dates, distortions in the periodicities of the core properties due to varying sedimentation rates are realigned by fast Fourier analysis so as to maximise the spectral energy density at the orbital frequencies. This allows age interpolation to all parts of the core to an accuracy of 10 kyr, or about 1.5% of the record duration for a typical Brunhes sequence. The influence of astronomical forcing is examined, and the method is applied to provide preliminary dates in a high-resolution Brunhes record from DSDP Site 594 off southeastern New Zealand.
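The tuning idea can be sketched as follows. This is not the Date-32 program, only a hedged illustration of the principle: among trial age models, pick the one that maximises Fourier power at an orbital frequency. The synthetic 41 kyr cycle, depths and names below are invented for the example, and a single-frequency DFT stands in for the program's FFT step:

```python
import numpy as np

def orbital_power(ages, values, period):
    """Spectral energy of a record at one orbital period.

    The record is resampled onto a uniform time axis, then the
    Fourier power is evaluated directly at 1/period (a single
    frequency DFT, which avoids FFT bin mismatch). Orbital tuning
    picks the trial age model that maximises this quantity.
    """
    t = np.linspace(ages[0], ages[-1], 1024)
    x = np.interp(t, ages, values)
    x = x - x.mean()
    return np.abs(np.sum(x * np.exp(-2j * np.pi * t / period))) ** 2

# Synthetic core: a 41 kyr obliquity cycle deposited with a
# sedimentation rate that doubles below 300 cm depth.
age = np.linspace(0.0, 600.0, 2001)                      # kyr
depth = np.where(age < 300, age, 300 + 2 * (age - 300))  # cm
carbonate = np.sin(2 * np.pi * age / 41.0)

# Trial age models share the known top (0 kyr) and bottom (600 kyr)
# dates and vary the age assigned to the 300 cm horizon.
breaks = np.arange(240.0, 361.0, 5.0)
scores = [orbital_power(np.interp(depth, [0.0, 300.0, 900.0],
                                  [0.0, b, 600.0]), carbonate, 41.0)
          for b in breaks]
best = breaks[np.argmax(scores)]
print(best)  # the true age of the 300 cm horizon is 300 kyr
```

Maximising the 41 kyr spectral energy recovers the age of the sedimentation-rate change, which is the sense in which realigning periodicities allows age interpolation throughout the core.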
Abstract:
Sediment samples from both Site 165-999/165-1000 (Atlantic) and Site 202-1241 (Pacific) were chosen at 1 Ma intervals over the period 0.3–9.3 Ma. Samples were washed and sieved (<150 µm). Splits of the sediment fraction were picked completely to obtain, where possible, at least 30 specimens each of the planktic foraminifer species Globigerinoides sacculifer and Globorotalia tumida, on which outline (Fourier) analysis was performed. Sea-surface and thermocline temperatures were reconstructed from palaeoenvironmental proxies (UK37' and TEX86H, respectively).
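Outline (Fourier) analysis reduces a closed shell outline to a small set of harmonic amplitudes. The sketch below shows one common variant, radial Fourier descriptors, on a synthetic ellipse; the study may have used a different variant (e.g. elliptic Fourier analysis), and all names here are hypothetical:

```python
import numpy as np

def outline_harmonics(x, y, n_harm=8):
    """Radial Fourier shape descriptors of a closed outline.

    The outline is expressed as radius versus angle about its
    centroid, and the amplitudes of the first Fourier harmonics are
    returned, normalised by the mean radius so the descriptors are
    size-invariant.
    """
    cx, cy = x.mean(), y.mean()
    theta = np.arctan2(y - cy, x - cx)
    r = np.hypot(x - cx, y - cy)
    order = np.argsort(theta)
    # Resample the radius onto an even angular grid before the FFT.
    grid = np.linspace(-np.pi, np.pi, 256, endpoint=False)
    r_even = np.interp(grid, theta[order], r[order], period=2 * np.pi)
    coeffs = np.fft.rfft(r_even) / len(r_even)
    amps = 2 * np.abs(coeffs[1:n_harm + 1]) / np.abs(coeffs[0])
    return amps

# An elongated (3:1) elliptical outline: elongation appears almost
# entirely in the second harmonic of the radius function.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = 3.0 * np.cos(t), 1.0 * np.sin(t)
amps = outline_harmonics(x, y)
print(np.argmax(amps) + 1)  # dominant harmonic
```

In a morphometric study such descriptors, computed per specimen, become the variables compared between samples and against the temperature proxies.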
Abstract:
This thesis was concerned with investigating methods of improving the IOP pulse’s potential as a measure of clinical utility. There were three principal sections to the work. 1. Optimisation of measurement and analysis of the IOP pulse. A literature review, covering the years 1960–2002 and other relevant scientific publications, provided a knowledge base on the IOP pulse. Initial studies investigated suitable instrumentation and measurement techniques. Fourier transformation was identified as a promising method of analysing the IOP pulse, and this technique was developed. 2. Investigation of ocular and systemic variables that affect IOP pulse measurements. In order to recognise clinically important changes in IOP pulse measurement, studies were performed to identify influencing factors. Fourier analysis was tested against traditional parameters to assess its ability to detect differences in the IOP pulse. In addition, it had been speculated that the waveform components of the IOP pulse contained vascular characteristics analogous to those found in arterial pulse waves; validation studies to test this hypothesis were attempted. 3. The nature of the intraocular pressure pulse in health and disease and its relation to systemic cardiovascular variables. Fourier analysis and traditional parameters were applied to IOP pulse measurements taken on diseased and healthy eyes. Only the derived parameter, pulsatile ocular blood flow (POBF), detected differences in the diseased groups. The use of an ocular pressure-volume relationship may have improved the POBF measure’s variance in comparison to measurement of the pulse’s amplitude or Fourier components. Finally, the importance of the driving force of pulsatile blood flow, the arterial pressure pulse, is highlighted.
A method of combining the measurements of pulsatile blood flow and pulsatile blood pressure to create a measure of ocular vascular impedance is described along with its advantages for future studies.
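The Fourier treatment of a pulse waveform can be illustrated with a short sketch: reducing a periodic signal to the amplitudes of its fundamental and first overtones. This is only an illustration of the technique, not the thesis's method; the sampling rate, pulse rate and amplitudes below are invented:

```python
import numpy as np

def pulse_harmonics(signal, fs, pulse_rate_hz, n_harm=3):
    """Amplitudes of the first harmonics of a periodic pulse.

    The waveform is reduced via the FFT to the amplitudes of its
    fundamental and first overtones; these are the kind of Fourier
    components that can be compared across subjects or conditions.
    """
    x = signal - np.mean(signal)
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    amps = 2.0 * np.abs(np.fft.rfft(x)) / n
    return [amps[np.argmin(np.abs(freqs - k * pulse_rate_hz))]
            for k in range(1, n_harm + 1)]

# Synthetic pulse: a 1.2 Hz (72 bpm) fundamental of 1.0 mmHg with a
# smaller second harmonic, on a 15 mmHg baseline, sampled at 100 Hz.
fs, f0 = 100.0, 1.2
t = np.arange(0.0, 10.0, 1.0 / fs)
iop = (15.0 + 1.0 * np.sin(2 * np.pi * f0 * t)
       + 0.3 * np.sin(2 * np.pi * 2 * f0 * t))
harm = pulse_harmonics(iop, fs, f0)
print([round(a, 2) for a in harm])  # fundamental, 2nd and 3rd harmonic
```

The recovered harmonic amplitudes summarise the waveform's shape in a few numbers, which is what makes them candidates for detecting differences between healthy and diseased eyes.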
Abstract:
The support vector machine classifier is used in several problems in various areas of knowledge. Basically, the method used in this classifier is to find the hyperplane that maximizes the distance between the groups, in order to increase the generalization capability of the classifier. In this work, we treated some problems of binary classification of data obtained by electroencephalography (EEG) and electromyography (EMG) using the Support Vector Machine together with some complementary techniques, such as: Principal Component Analysis to identify the active regions of the brain, the periodogram method, obtained by Fourier analysis, to help discriminate between groups, and the Simple Moving Average to eliminate some of the noise present in the data. Two functions were developed in the R software for carrying out the training and classification tasks. In addition, two weighting systems and a summary measure were proposed to support the classification decision. The application of these techniques, weights and summary measure in the classifier showed quite satisfactory results: the best results were an average rate of 95.31% for visual stimuli data, 100% correct classification for epilepsy data, and rates of 91.22% and 96.89% for object motion data for two subjects.
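A hedged, numpy-only sketch of the feature pipeline described above (simple moving average, then periodogram, then a linear maximum-margin classifier). The study used R; here a Pegasos-style subgradient trainer merely stands in for a full SVM implementation, and the two synthetic "EEG-like" classes are invented for illustration:

```python
import numpy as np

def moving_average(x, w):
    """Simple moving average, used to suppress high-frequency noise."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def periodogram(x):
    """Periodogram (squared FFT magnitude over n): power per bin."""
    x = x - x.mean()
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

def svm_train(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM via the Pegasos subgradient method.

    Finds a hyperplane separating the classes with a large margin;
    labels y must be in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    step = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            step += 1
            eta = 1.0 / (lam * step)
            if y[i] * (X[i] @ w) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Two classes that differ in spectral content: 10 Hz vs 20 Hz rhythms.
rng = np.random.default_rng(1)
fs = 128
t = np.arange(0, 1, 1.0 / fs)
def trial(f):
    return np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(len(t))

X = np.array([periodogram(moving_average(trial(f), 3))
              for f in [10] * 20 + [20] * 20])
y = np.array([-1] * 20 + [1] * 20)

w = svm_train(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)
```

The periodogram features put the two rhythms' power into different frequency bins, so the linear SVM separates the groups with near-perfect training accuracy.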
Abstract:
We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (the discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are enough to analyze and extract emotional content in audio signals, presenting a high accuracy rate in the classification of emotional states without the need to use other kinds of classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
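The feature-extraction step described above can be sketched as follows, restricted to the Haar (Db1) member of the Daubechies family to stay dependency-free (Db6/Db8/Db10 need longer filter banks, e.g. via PyWavelets). The "calm"/"agitated" frames are toy stand-ins for voice segments, not the study's data:

```python
import numpy as np

def haar_decompose(x, levels):
    """Multilevel Haar (Db1) wavelet decomposition.

    Returns the detail coefficients of each level plus the final
    approximation; each level halves the frequency band.
    """
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return details, a

def band_features(coeff_sets):
    """Per-band statistics of the kind used as classifier features:
    log-energy and standard deviation of each coefficient set."""
    return [(np.log(np.sum(c ** 2) + 1e-12), np.std(c))
            for c in coeff_sets]

# Toy "voice" frames: a low-frequency tone vs. one with strong
# high-frequency content (illustrative only).
t = np.linspace(0, 1, 1024, endpoint=False)
calm = np.sin(2 * np.pi * 5 * t)
agitated = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 200 * t)

for name, frame in [("calm", calm), ("agitated", agitated)]:
    details, approx = haar_decompose(frame, 4)
    feats = band_features(details + [approx])
    print(name, [round(e, 1) for e, _ in feats])
```

The per-band statistics form the feature vector that would then be fed to a classifier such as an ANN; signals with different spectral content separate clearly in this feature space.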
Abstract:
The modelling of diffusive terms in particle methods is a delicate matter and several models were proposed in the literature to take such terms into account. The diffusion velocity method (DVM), originally designed for the diffusion of passive scalars, turns diffusive terms into convective ones by expressing them as a divergence involving a so-called diffusion velocity. In this paper, DVM is extended to the diffusion of vectorial quantities in the three-dimensional Navier–Stokes equations, in their incompressible, velocity–vorticity formulation. The integration of a large eddy simulation (LES) turbulence model is investigated and a DVM general formulation is proposed. Either with or without LES, a novel expression of the diffusion velocity is derived, which makes it easier to approximate and which highlights the analogy with the original formulation for scalar transport. From this statement, DVM is then analysed in one dimension, both analytically and numerically on test cases to point out its good behaviour.
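For the original scalar case the abstract alludes to, the diffusion velocity can be written down explicitly (a standard identity, assuming constant diffusivity \(\nu\) and \(c > 0\)):

```latex
% Scalar advection-diffusion equation:
\partial_t c + \nabla \cdot (\mathbf{u}\, c) = \nu\, \nabla^2 c .
% The diffusive term is rewritten as a divergence involving a
% diffusion velocity u_d:
\nu\, \nabla^2 c = -\,\nabla \cdot (\mathbf{u}_d\, c),
\qquad
\mathbf{u}_d = -\,\nu\, \frac{\nabla c}{c},
% so the equation becomes a pure transport equation at the
% combined velocity u + u_d:
\partial_t c + \nabla \cdot \big[ (\mathbf{u} + \mathbf{u}_d)\, c \big] = 0 .
```

The paper's contribution is the extension of this construction from passive scalars to the vectorial (vorticity) case and its coupling with an LES model.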
Abstract:
The graph Laplacian operator is widely studied in spectral graph theory, largely due to its importance in modern data analysis. Recently, the Fourier transform and other time-frequency operators have been defined on graphs using Laplacian eigenvalues and eigenvectors. We extend these results and prove that the translation operator to the i-th node is invertible if and only if all eigenvectors are nonzero on the i-th node. Because of this dependency on the support of eigenvectors, we study the characteristic set of Laplacian eigenvectors. We prove that the Fiedler vector of a planar graph cannot vanish on large neighborhoods, and then explicitly construct a family of non-planar graphs that do exhibit this property. We then prove original results in modern analysis on graphs. We extend results on spectral graph wavelets to create vertex-dynamic spectral graph wavelets whose support depends on both scale and translation parameters. We prove that Spielman's twice-Ramanujan graph sparsification algorithm cannot outperform his conjectured optimal sparsification constant. Finally, we present numerical results on graph conditioning, in which the edges of a graph are rescaled to best approximate the complete graph and reduce the average commute time.
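The invertibility criterion can be checked numerically on a small example. This sketch assumes the generalised translation operator of graph signal processing (Shuman et al. style), built from the Laplacian eigenbasis, and uses the 3-node path graph, whose middle node is a zero of one Laplacian eigenvector:

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

def translation_matrix(A, i):
    """Generalised translation to node i.

    In the graph Fourier domain it multiplies the l-th coefficient
    by sqrt(n) * u_l(i), so it is invertible iff every Laplacian
    eigenvector u_l is nonzero at node i, the criterion in the text.
    """
    n = len(A)
    _, U = np.linalg.eigh(laplacian(A))  # columns are eigenvectors u_l
    return np.sqrt(n) * U @ np.diag(U[i]) @ U.T

# Path graph 0-1-2: the Laplacian eigenvector (1, 0, -1)/sqrt(2)
# vanishes at the centre node, so translation to node 1 is singular,
# while translation to an end node is invertible.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
r0 = np.linalg.matrix_rank(translation_matrix(A, 0))
r1 = np.linalg.matrix_rank(translation_matrix(A, 1))
print(r0, r1)  # full rank at node 0, rank-deficient at node 1
```

The rank deficit at node 1 is exactly the loss of invertibility caused by an eigenvector vanishing there, which motivates the study of characteristic sets of Laplacian eigenvectors.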