882 results for Inverse Discrete Fourier Transform
Abstract:
In this correspondence, the conditions for using any kind of discrete cosine transform (DCT) for multicarrier data transmission are derived. The symmetric convolution-multiplication property of each DCT implies that when symmetric convolution is performed in the time domain, an element-by-element multiplication is performed in the corresponding discrete trigonometric domain. Therefore, by appending symmetric redundancy (as prefix and suffix) to each transmitted data symbol, and by enforcing symmetry on the equivalent channel impulse response, the linear convolution performed by the transmission channel becomes a symmetric convolution over the samples of interest. Furthermore, channel equalization can be carried out by means of a bank of scalars in the corresponding discrete cosine transform domain. Expressions for obtaining the value of each scalar of these one-tap-per-subcarrier equalizers are presented. The study is completed with several computer simulations in mobile broadband wireless communication scenarios, considering the presence of carrier frequency offset (CFO). The results indicate that the proposed systems outperform the standardized ones based on the DFT.
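The element-by-element equalisation described above generalises the familiar DFT property that conventional OFDM relies on: circular convolution in time equals pointwise multiplication in frequency, so the channel can be inverted with one complex scalar per subcarrier. A minimal numerical sketch of that baseline DFT case (illustrative signals, not the paper's DCT system):

```python
import numpy as np

# Baseline DFT property that the paper generalises to DCTs: circular
# convolution in time equals element-wise multiplication in frequency.
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)            # data block (illustrative)
h = np.zeros(N)
h[:8] = rng.standard_normal(8)        # short channel impulse response

# Element-wise multiplication in the DFT domain...
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# ...equals circular convolution in the time domain
y_circ = np.array([sum(x[(n - k) % N] * h[k] for k in range(N))
                   for n in range(N)])

# One-tap-per-subcarrier equalisation: divide by the channel's DFT
x_hat = np.fft.ifft(np.fft.fft(y_fft) / np.fft.fft(h)).real
```

The DCT case replaces the cyclic extension with a symmetric prefix and suffix, turning the channel's linear convolution into a symmetric convolution that diagonalises in the trigonometric domain.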
Abstract:
This final degree project, titled “High-level modeling with SystemC”, has as its main objective the modeling of several modules of an MPEG-2 video encoder using the digital-system description language SystemC at the TLM (Transaction Level Modeling) abstraction level. SystemC is a digital-system description language based on C++. It provides a set of routines and libraries implementing data types, structures and special processes for modeling digital systems; a full description can be found in [GLMS02]. The TLM abstraction level is characterized by separating the communication between modules from their functionality. It places more emphasis on the functionality of the communication between modules (where data come from and where they go) than on its exact implementation. The documents [RSPF] and [HG] describe TLM and an implementation example. The architecture of the model is based on the MVIP-2 encoder described in [Gar04]; the implemented modules are: · IVIDEOH: filters the input video in the horizontal dimension and stores the filtered video in memory. · IVIDEOV: reads from memory the video filtered by IVIDEOH, filters it in the vertical dimension and writes the filtered video to memory. · DCT: reads the video filtered by IVIDEOV, computes the discrete cosine transform and stores the transformed video in memory. · QUANT: reads the video transformed by DCT, quantizes it and stores the result in memory. · IQUANT: reads the video quantized by QUANT, performs the inverse quantization and stores the result in memory. · IDCT: reads the video processed by IQUANT, computes the inverse cosine transform and stores the result in memory. · IMEM: acts as the interface between the preceding modules and the memory.
It manages simultaneous memory-access requests and guarantees exclusive access to the memory at every instant. All these modules appear in grey in the following figure, which shows the architecture of the model: Figura 1. Arquitectura del modelo (SEE PDF OF THE PFC). The figure also shows some modules in white; these are test modules added to run simulations and exercise the model: · CAMARA: simulates a black-and-white camera; it reads the luminance from a video file and sends it to the model through a FIFO. · FIFO: acts as the interface between the camera and the model; it stores the data sent by the camera until IVIDEOH reads them. · CONTROL: controls the video-processing modules; they notify it when they finish processing a video frame, and it starts whichever modules are needed to continue the encoding, so it is responsible for the correct sequencing of the video-processing modules. · RAM: simulates a RAM memory, including a programmable access delay. For the tests, video files with the output of each video-processing module, message files, and a trace file showing the sequencing of the processors were also generated. From the work in this project it can be concluded that SystemC allows digital systems to be modeled quite easily (prior knowledge of C++ and object-oriented programming is needed) and supports models at a higher abstraction level than the RTL usual in Verilog and VHDL; in this project, the TLM. ABSTRACT This final degree project, titled “High-level modeling with SystemC”, has as its main objective the modeling of several modules of an MPEG-2 video encoder using the SystemC digital-system description language at the TLM (Transaction Level Modeling) abstraction level.
SystemC is a digital-system description language based on C++. It contains routines and libraries that define special data types, structures and processes for modeling digital systems. A complete description of the SystemC language is given in [GLMS02]. The main characteristic of the TLM abstraction level is that it separates the communication among modules from their functionality. This abstraction level puts more emphasis on the functionality of the communication (where data come from and where they go) than on its exact implementation. TLM and an example are described in the documents [RSPF] and [HG]. The architecture of the model is based on the MVIP-2 video coder (described in [Gar04]). The modeled modules are: · IVIDEOH: filters the video input in the horizontal dimension and saves the filtered video in memory. · IVIDEOV: reads the video filtered by IVIDEOH, filters it in the vertical dimension and saves the filtered video in memory. · DCT: reads the video filtered by IVIDEOV, computes the discrete cosine transform and saves the transformed video in memory. · QUANT: reads the video transformed by DCT, quantizes it and saves the quantized video in memory. · IQUANT: reads the video processed by QUANT, performs the inverse quantization and saves the result in memory. · IDCT: reads the video processed by IQUANT, computes the inverse cosine transform and saves the result in memory. · IMEM: the interface between the modules described above and the memory. It manages simultaneous memory accesses and ensures a single access at each instant of time. All these modules are shown in grey in the following figure (SEE PDF OF PFC), which shows the architecture of the model: Figure 1.
Architecture of the model. The figure also includes other modules in white; these have been added to the model in order to simulate and test the modules of the model: · CAMARA: simulates a black-and-white video camera; it reads the luminance from a video file and sends it to the model through a FIFO. · FIFO: the interface between the camera and the model; it stores the video data sent by the camera until the IVIDEOH module reads it. · CONTROL: controls the modules that process the video. These modules notify the CONTROL module when they have finished processing a video frame; the CONTROL module then starts the modules needed to continue the video coding, so it is responsible for the correct sequencing of the video-processing modules. · RAM: simulates a RAM memory, including a programmable delay in memory accesses. Video files, text files and a trace file have been generated to check the correct operation of the model; the trace file shows the sequencing of the video-processing modules. As a result of this final degree project, it can be concluded that modeling digital systems with SystemC is quite easy (only prior knowledge of C++ and object-oriented programming is needed) and that it allows modeling at a level of abstraction higher than the RTL used in Verilog and VHDL; in this project, the TLM.
Abstract:
Electrical and magnetic brain waves of two subjects were recorded for the purpose of recognizing which one of 12 sentences or seven words auditorily presented was processed. The analysis consisted of averaging over trials to create prototypes and test samples, to each of which a Fourier transform was applied, followed by filtering and an inverse transformation to the time domain. The filters used were optimal predictive filters, selected for each subject. A still further improvement was obtained by taking differences between recordings of two electrodes to obtain bipolar pairs that then were used for the same analysis. Recognition rates, based on a least-squares criterion, varied, but the best were above 90%. The first words of prototypes of sentences also were cut and pasted to test, at least partially, the invariance of a word’s brain wave in different sentence contexts. The best result was above 80% correct recognition. Test samples made up only of individual trials also were analyzed. The best result was 134 correct of 288 (47%), which is promising, given that the expected recognition number by chance is just 24 (or 8.3%). The work reported in this paper extends our earlier work on brain-wave recognition of words only. The recognition rates reported here further strengthen the case that recordings of electric brain waves of words or sentences, together with extensive mathematical and statistical analysis, can be the basis of new developments in our understanding of brain processing of language.
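The recognition pipeline described (averaging over trials, a Fourier transform, filtering, an inverse transform, then least-squares matching against prototypes) can be sketched on synthetic data. All sizes, the noise level, and the simple low-pass filter below are illustrative assumptions, not the optimal predictive filters selected per subject in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 "sentences", 20 trials each, 256 samples per trial
n_classes, n_trials, n_samples = 3, 20, 256
templates = rng.standard_normal((n_classes, n_samples))
trials = templates[:, None, :] + 0.8 * rng.standard_normal(
    (n_classes, n_trials, n_samples))

# Average over trials to create prototypes and test samples
prototypes = trials[:, :10].mean(axis=1)
tests = trials[:, 10:].mean(axis=1)

# Fourier transform, crude low-pass filter, inverse transform to time domain
def lowpass(x, keep=30):
    X = np.fft.rfft(x, axis=-1)
    X[..., keep:] = 0.0
    return np.fft.irfft(X, n=x.shape[-1], axis=-1)

prototypes_f = lowpass(prototypes)
tests_f = lowpass(tests)

# Recognition by a least-squares criterion: nearest prototype wins
dists = ((tests_f[:, None, :] - prototypes_f[None, :, :]) ** 2).sum(axis=-1)
predicted = dists.argmin(axis=1)
accuracy = (predicted == np.arange(n_classes)).mean()
```

With averaging over ten trials the residual noise is small relative to the template differences, which is why averaged test samples classify far better than individual trials.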
Abstract:
Based on a single-degree-of-freedom system, this work presents the equation of motion and its solution by means of the Fourier Transform and the Fast Fourier Transform (FFT). By analysing how the integrations in the transforms are carried out, Newton-Cotes weights were studied and applied to the solution of the equation of motion, substantially increasing the precision of the results in comparison with the conventional form of the Fourier Transform.
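The conventional Fourier route that the Newton-Cotes weighting improves upon can be sketched for a single-degree-of-freedom oscillator: transform the load, divide by the complex frequency-response denominator, and transform back. The system parameters and the half-sine load below are illustrative:

```python
import numpy as np

# Hypothetical SDOF system: m x'' + c x' + k x = p(t)
m, c, k = 1.0, 0.2, 25.0        # mass, damping, stiffness (illustrative)
n, dt = 4096, 0.01              # long record so the response decays
t = np.arange(n) * dt

# Load: a half-sine pulse lasting 0.5 s
p = np.where(t < 0.5, np.sin(2 * np.pi * t), 0.0)

# Frequency-domain solution: X(w) = P(w) / (k - m w^2 + i c w)
w = 2 * np.pi * np.fft.rfftfreq(n, dt)
H = 1.0 / (k - m * w**2 + 1j * c * w)
x = np.fft.irfft(np.fft.rfft(p) * H, n=n)
```

The FFT implicitly assumes a periodic load, so the record must be long enough for the damped response to die out before it wraps around; the Newton-Cotes weights studied in the work refine the discrete approximation of the transform integrals themselves.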
Abstract:
A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of multichannel seismic data. The considered time–frequency transforms include the continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform. The developed approaches provide a fast and precise time–frequency examination of the seismograms at different frequency bands. Moreover, filtering methods for noise, transients or even baseline removal, are implemented. The primary motivation is to support seismologists with a user-friendly and fast program for the wavelet analysis, providing practical and understandable results.
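As a minimal illustration of discrete-wavelet filtering of a noisy record (a hand-written single-level Haar transform in Python, not the authors' MATLAB code), detail coefficients below a soft threshold are discarded before reconstruction:

```python
import numpy as np

# Illustrative noisy signal: a 5 Hz sine plus white noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Haar analysis: approximation and detail coefficients
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# Soft-threshold the detail band (universal threshold; noise std known here)
thr = 0.3 * np.sqrt(2 * np.log(noisy.size))
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

# Haar synthesis: invert the transform
denoised = np.empty_like(noisy)
denoised[0::2] = (a + d) / np.sqrt(2)
denoised[1::2] = (a - d) / np.sqrt(2)
```

A multilevel decomposition with a smoother wavelet (as in the described code) applies the same idea band by band, allowing transients or baseline drift to be isolated and removed.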
Abstract:
A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analyses of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time–period examination of the time series at different period bands. Moreover, statistic methods to examine the relation between two signals have been included. Finally, the entropy of curves and splines based methods have also been developed for segmenting and modeling the analyzed time series. All these methods together provide a user-friendly and fast program for the environmental signal analysis, with useful, practical and understandable results.
Abstract:
Full-field Fourier-domain optical coherence tomography (3F-OCT) is a full-field version of spectral domain/swept source optical coherence tomography. A set of two-dimensional Fourier holograms is recorded at discrete wavenumbers spanning the swept source tuning range. The resultant three-dimensional data cube contains comprehensive information on the three-dimensional spatial properties of the sample, including its morphological layout and optical scatter. The morphological layout can be reconstructed in software via three-dimensional discrete Fourier transformation. The spatial resolution of the 3F-OCT reconstructed image, however, is degraded due to the presence of a phase cross-term, whose origin and effects are addressed in this paper. We present a theoretical and experimental study of the imaging performance of 3F-OCT, with particular emphasis on elimination of the deleterious effects of the phase cross-term.
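The reconstruction step can be illustrated with a toy model: a point scatterer contributes a three-dimensional complex exponential to the hologram data cube, and a three-dimensional DFT localises it. The cube dimensions and scatterer position are illustrative, and the phase cross-term analysed in the paper is omitted from this idealised sketch:

```python
import numpy as np

# Idealised 3F-OCT data cube: two transverse hologram axes and one
# wavenumber axis (sizes are illustrative)
nx, ny, nk = 32, 32, 64
x0, y0, z0 = 5, 12, 20          # hypothetical point-scatterer position

M, N, K = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nk),
                      indexing="ij")
cube = np.exp(2j * np.pi * (M * x0 / nx + N * y0 / ny + K * z0 / nk))

# Morphological layout recovered by a 3-D discrete Fourier transform
image = np.fft.fftn(cube)
peak = np.unravel_index(np.abs(image).argmax(), image.shape)
```

In the real instrument the cross-term adds a depth-dependent phase curvature that blurs this ideal point response, which is the degradation the paper quantifies and removes.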
Abstract:
Using the integrable nonlinear Schrodinger equation (NLSE) as a channel model, we describe the application of nonlinear spectral management for effective mitigation of all nonlinear distortions induced by the fiber Kerr effect. Our approach is a modification and substantial development of the so-called eigenvalue communication idea first presented in A. Hasegawa, T. Nyu, J. Lightwave Technol. 11, 395 (1993). The key feature of the nonlinear Fourier transform (inverse scattering transform) method is that for the NLSE, any input signal can be decomposed into the so-called scattering data (nonlinear spectrum), which evolve in a trivial manner, similar to the evolution of Fourier components in linear equations. We consider here a practically important weakly nonlinear transmission regime and propose a general method of the effective encoding/modulation of the nonlinear spectrum: The machinery of our approach is based on the recursive Fourier-type integration of the input profile and, thus, can be considered for electronic or all-optical implementations. We also present a novel concept of nonlinear spectral pre-compensation, or in other terms, an effective nonlinear spectral pre-equalization. The proposed general technique is then illustrated through particular analytical results available for the transmission of a segment of the orthogonal frequency division multiplexing (OFDM) formatted pattern, and through WDM input based on Gaussian pulses. Finally, the robustness of the method against the amplifier spontaneous emission is demonstrated, and the general numerical complexity of the nonlinear spectrum usage is discussed. © 2013 Optical Society of America.
Abstract:
The paper deals with generalisations of the Hough Transform that make it a means for analysing uncertainty. Some results concerning the Hough Transform for Euclidean spaces are presented. These use the machinery of the generalised inverse to describe both the transform itself and its accumulator function.
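For reference, the classical line Hough transform that the paper generalises can be sketched as follows: each point votes along a sinusoid in (rho, theta) parameter space, and collinear points concentrate their votes in a single accumulator cell (grid sizes below are illustrative):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Accumulate votes for lines rho = x cos(theta) + y sin(theta)."""
    pts = np.asarray(points, dtype=float)
    if rho_max is None:
        rho_max = np.hypot(pts[:, 0], pts[:, 1]).max() + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # one sinusoid per point
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rho_max

# Collinear points on the horizontal line y = 2 vote for the same cell
pts = [(x, 2.0) for x in range(10)]
acc, thetas, rho_max = hough_lines(pts)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator function the paper analyses is this voting array; uncertainty in the input points spreads each sinusoid over neighbouring cells.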
Abstract:
Mathematics Subject Classification: Primary 33E20, 44A10; Secondary 33C10, 33C20, 44A20
Abstract:
Recent studies have shown evidence of log-periodic behavior in non-hierarchical systems. An interesting fact is the emergence of such properties at the rupture and breakdown of complex materials and in financial failures. These may be examples of systems with self-organized criticality (SOC). In this work we study the detection of discrete scale invariance, or log-periodicity. We show theoretically the effectiveness of Fourier-transform-based methods for detecting log-periodicity, both with prior knowledge of the critical point and before that point. Specifically, we studied the Brazilian financial market with the objective of detecting discrete scale invariance in the Bovespa (Bolsa de Valores de São Paulo) index; historical series from periods in 1999, 2001 and 2008 were selected. We report evidence of possible log-periodicity before crashes, showing the method's applicability to the study of systems with likely discrete scale invariance and, in the case of financial crashes, providing additional evidence of the possibility of forecasting them.
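The Fourier-based detection idea can be sketched as follows: a log-periodic signature of the form A + B(t_c - t)^m cos(omega ln(t_c - t)) becomes an ordinary sinusoid in the variable s = ln(t_c - t), so resampling onto a uniform grid in s and taking an FFT reveals the log-frequency omega. All parameters below are illustrative synthetic values, not fitted market data:

```python
import numpy as np

# Synthetic log-periodic series with known critical time tc
tc, m, omega = 1.0, 0.5, 8.0
t = np.linspace(0.0, 0.99, 4096)
price = 10.0 + 2.0 * (tc - t) ** m * np.cos(omega * np.log(tc - t))

# Resample onto a uniform grid in s = ln(tc - t); s decreases with t,
# so reverse the arrays for interpolation
s = np.log(tc - t)
s_uniform = np.linspace(s.min(), s.max(), 4096)
x = np.interp(s_uniform, s[::-1], price[::-1])

# Detrend, window, and locate the dominant "log-frequency"
x = x - x.mean()
X = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, d=s_uniform[1] - s_uniform[0])
omega_est = 2 * np.pi * freqs[X.argmax()]
```

In practice t_c is unknown, which is why detection before the critical point (scanning candidate t_c values) is the harder and more useful case the work addresses.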
Abstract:
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known result but not as well exemplified. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the Radon Transform properties. Analytically, it is shown that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. Inversion of synthetic data confirms this result and also shows that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly can also be an artifact. Jointly, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination may not be overcome by the use of mathematical constraints. It is shown that crosswell tomograms derived with sparsity constraints, using both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always take into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
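The geometric illumination argument can be sketched numerically: for straight rays between two vertical wells, the angular aperture is set by the source and receiver depth ranges and the well separation, and an interface dip outside that aperture is not illuminated. The depths and separation below are illustrative, and straight-ray geometry is an assumption:

```python
import numpy as np

# Hypothetical crosswell geometry: two vertical wells 100 m apart,
# sources and receivers every 10 m down to 200 m depth
L = 100.0
src_depths = np.linspace(0.0, 200.0, 21)
rec_depths = np.linspace(0.0, 200.0, 21)

# Angle of each straight source-receiver ray with the horizontal
angles = np.degrees(np.arctan2(rec_depths[None, :] - src_depths[:, None], L))
aperture = (angles.min(), angles.max())   # about -63.4 to +63.4 degrees here

def is_illuminated(dip_deg):
    """A dip is imaged only if some ray is (near-)parallel to it."""
    return aperture[0] <= dip_deg <= aperture[1]
```

Near-vertical interfaces fall outside this aperture regardless of the inversion constraints applied, which is the source-side limitation the abstract emphasizes.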
Abstract:
The main objective of this work was to develop a novel dimensionality reduction technique as a part of an integrated pattern recognition solution capable of identifying adulterants such as hazelnut oil in extra virgin olive oil at low percentages based on spectroscopic chemical fingerprints. A novel Continuous Locality Preserving Projections (CLPP) technique is proposed which allows the modelling of the continuous nature of the produced in-house admixtures as data series instead of discrete points. The maintenance of the continuous structure of the data manifold enables the better visualisation of this examined classification problem and facilitates the more accurate utilisation of the manifold for detecting the adulterants. The performance of the proposed technique is validated with two different spectroscopic techniques (Raman and Fourier transform infrared, FT-IR). In all cases studied, CLPP accompanied by k-Nearest Neighbors (kNN) algorithm was found to outperform any other state-of-the-art pattern recognition techniques.
Abstract:
In this paper, we propose an orthogonal chirp division multiplexing (OCDM) technique for coherent optical communication. OCDM is the principle of orthogonally multiplexing a group of linear chirped waveforms for high-speed data communication, achieving the maximum spectral efficiency (SE) for chirp spread spectrum, in a similar way as the orthogonal frequency division multiplexing (OFDM) does for frequency division multiplexing. In the coherent optical (CO)-OCDM, Fresnel transform formulates the synthesis of the orthogonal chirps; discrete Fresnel transform (DFnT) realizes the CO-OCDM in the digital domain. As both the Fresnel and Fourier transforms are trigonometric transforms, the CO-OCDM can be easily integrated into the existing CO-OFDM systems. Analyses and numerical results are provided to investigate the transmission of CO-OCDM signals over optical fibers. Moreover, experiments of 36-Gbit/s CO-OCDM signal are carried out to validate the feasibility and confirm the analyses. It is shown that the CO-OCDM can effectively compensate the dispersion and is more resilient to fading and noise impairment than OFDM.
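A hedged sketch of the DFnT-based modulation: one common form of the discrete Fresnel transform matrix for even N (exact phase conventions vary between references) is unitary, so a block of symbols can be spread over orthogonal chirps and recovered with the conjugate-transpose matrix:

```python
import numpy as np

def dfnt_matrix(N):
    """One common even-N form: Phi[m,n] = e^{-i pi/4}/sqrt(N) * e^{i pi (m-n)^2 / N}."""
    m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-1j * np.pi / 4) / np.sqrt(N) * np.exp(1j * np.pi * (m - n) ** 2 / N)

N = 64
Phi = dfnt_matrix(N)

# OCDM round trip with QPSK-like symbols (illustrative data)
rng = np.random.default_rng(2)
symbols = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
tx = Phi.conj().T @ symbols    # synthesis: superpose orthogonal linear chirps
rx = Phi @ tx                  # analysis: recover the symbols
```

Because the columns of Phi are orthogonal chirps, this plays the role that the IDFT/DFT pair plays in OFDM, which is why the paper can integrate CO-OCDM into existing CO-OFDM transceivers.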