887 results for fast Fourier-transform algorithm


Relevance: 100.00%

Abstract:

Materials obtained during the synthesis of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) were characterized by Fourier transform infrared (FTIR) transmission spectroscopy and/or Fourier transform infrared photoacoustic spectroscopy (FTIR-PAS). These techniques revealed the spectrometric changes that occurred during the process. The species characterized during the synthesis of HMX were alpha-HMX, beta-HMX, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) and HMX/RDX mixtures. FTIR-PAS proved to be a promising technique of great usefulness for the characterization of highly energetic materials because it is fast and simple and, unlike the FTIR transmission technique (KBr pellet), requires no sample preparation. The FTIR-PAS analysis showed that, with a small sample quantity, it is possible to distinguish between alpha-HMX and beta-HMX and to detect, even qualitatively, different HMX/RDX ratios.

Relevance: 100.00%

Abstract:

Controlling quality variables (such as basis weight and moisture) is a vital part of making top-quality paper or board. In this thesis, an advanced data assimilation tool is applied to the quality control system (QCS) of a paper or board machine. The functionality of the QCS is based on quality observations measured with a traversing scanner that follows a zigzag path. The basic idea is the following: the measured quality variable has to be separated into its machine-direction (MD) and cross-direction (CD) variations, because the QCS works separately in MD and CD. Traditionally, this is done simply by taking one scan of the zigzag path to be the CD profile and its mean value to be one point of the MD trend. In this thesis, a more advanced method is introduced. The fundamental idea is to use the signal's frequency components to represent the variation in both CD and MD. To move to the frequency domain, the Fourier transform is utilized. The frequency-domain representation, that is, the Fourier components, is then used as the state vector in a Kalman filter. The Kalman filter is a widely used data assimilation tool for combining noisy observations with a model; here, the observations are the quality measurements and the model is built on the Fourier frequency components. By implementing the two-dimensional Fourier transform in the Kalman filter, we obtain an advanced tool for separating the CD and MD components of the total variation or, more generally, for data assimilation. A piece of a paper roll is analyzed and this tool is applied to model the dataset. The results show that the Kalman filter algorithm is able to reconstruct the main features of the dataset from a zigzag path. Although the results are based on a very short sample of a paper roll, the method shows great potential for later use as part of a quality control system.
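A minimal sketch of the frequency-domain separation idea only (not the full Kalman assimilation over zigzag observations described in the abstract): a simulated quality map composed of an MD trend plus a CD profile is passed through the 2D FFT, where the MD-only variation concentrates in the zero-CD-frequency column and the CD-only variation in the zero-MD-frequency row. The signal shapes and grid sizes are illustrative assumptions.

```python
import numpy as np

# Hypothetical quality map: rows = machine direction (MD), columns = cross direction (CD).
n_md, n_cd = 128, 64
md = np.sin(2 * np.pi * np.arange(n_md) / 32)          # assumed MD trend
cd = 0.5 * np.cos(2 * np.pi * np.arange(n_cd) / 16)    # assumed CD profile
field = md[:, None] + cd[None, :]                      # separable MD + CD variation

# 2D Fourier transform: MD-only variation lives in the zero-CD-frequency column,
# CD-only variation in the zero-MD-frequency row.
F = np.fft.fft2(field)
md_est = np.real(np.fft.ifft(F[:, 0])) / n_cd          # MD trend (up to a constant offset)
cd_est = np.real(np.fft.ifft(F[0, :])) / n_md          # CD profile (up to a constant offset)
```

In the thesis's setting these Fourier coefficients would form the Kalman state and would be updated from one zigzag scan at a time rather than from a fully observed sheet.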

Relevance: 100.00%

Abstract:

In this study, hierarchical cluster analysis (HCA) and principal component analysis (PCA) were used to classify blends produced from diesel S500 and different kinds of biodiesel produced by the TDSP methodology. The biodiesels studied in this work were produced from three raw materials: soybean oil, waste cooking oil and hydrogenated vegetable oil. Methylic and ethylic routes were employed for the production of biodiesel. HCA and PCA were performed on data from attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy, showing separation of the blends into groups according to the biodiesel content of the blends and to the kind of biodiesel used to form the mixtures.
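A sketch of the PCA step on synthetic ATR-FTIR-like spectra; the band positions, group structure and noise level are invented for illustration, since the measured spectra are not given.

```python
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(650, 4000, 300)

def synth_spectrum(ester_height):
    """Toy ATR-FTIR-like spectrum: a hypothetical ester carbonyl band near
    1745 cm-1 whose height tracks biodiesel content, plus a C-H band and noise."""
    s = ester_height * np.exp(-((wavenumbers - 1745) / 15) ** 2)
    s += 0.8 * np.exp(-((wavenumbers - 2920) / 40) ** 2)
    return s + rng.normal(0, 0.01, wavenumbers.size)

# Two groups of blends: low vs. high biodiesel content (5 samples each).
X = np.array([synth_spectrum(h) for h in [0.1] * 5 + [0.9] * 5])

# PCA by SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                     # sample coordinates on the principal components
pc1 = scores[:, 0]                 # PC1 should separate the two content groups
```

HCA would then be run on the same score (or spectral) matrix; here PC1 alone already separates the two simulated content groups.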

Relevance: 100.00%

Abstract:

The aim of this work was to provide a faster, simpler and less expensive way to analyze the sulfur content of diesel samples than the standard methods currently used. Samples of diesel fuel with sulfur concentrations ranging from 400 to 2500 mg kg-1 were analyzed by two methodologies: X-ray fluorescence, according to ASTM D4294, and Fourier transform infrared (FTIR) spectrometry. The spectral data obtained from FTIR were used to build multivariate calibration models by partial least squares (PLS). Four models were built in three different ways: 1) a model using the full spectrum (665 to 4000 cm-1); 2) two models using specific spectral regions; and 3) a model with variables selected by the classical stepwise variable-selection method. The model obtained by stepwise variable selection and the model built with the spectral regions between 665 and 856 cm-1 and between 1145 and 2717 cm-1 showed the best results in the determination of sulfur content.
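The PLS calibration step can be sketched as follows: a minimal univariate PLS (NIPALS) regression fitted to synthetic FTIR-like spectra. The band shapes, the interferent and the spectrum-to-sulfur relation are invented assumptions, standing in for the real diesel spectra.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal univariate PLS regression (NIPALS); returns coefficients
    plus the column/target means needed for prediction."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # scores
        tt = t @ t
        p = Xk.T @ t / tt               # loadings
        W.append(w); P.append(p); q.append(yk @ t / tt)
        Xk = Xk - np.outer(t, p)        # deflate X
        yk = yk - q[-1] * t             # deflate y
    W, P = np.array(W).T, np.array(P).T
    b = W @ np.linalg.solve(P.T @ W, np.array(q))
    return b, x_mean, y_mean

def pls1_predict(X, b, x_mean, y_mean):
    return (X - x_mean) @ b + y_mean

# Synthetic spectra: a hypothetical sulfur-related band near 800 cm-1 whose
# height is proportional to sulfur content, plus an interfering band and noise.
rng = np.random.default_rng(1)
wn = np.linspace(665, 4000, 400)
sulfur = rng.uniform(400, 2500, 40)              # mg/kg
interf = rng.uniform(0, 1, 40)
X = (sulfur[:, None] / 2500) * np.exp(-((wn - 800) / 20) ** 2) \
    + interf[:, None] * np.exp(-((wn - 1600) / 30) ** 2) \
    + rng.normal(0, 0.005, (40, wn.size))

b, xm, ym = pls1_fit(X[:30], sulfur[:30], n_comp=3)
pred = pls1_predict(X[30:], b, xm, ym)
```

Restricting `wn` to sub-ranges before fitting reproduces, in miniature, the region-selection comparison described in the abstract.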

Relevance: 100.00%

Abstract:

Sonar signal processing comprises a large number of algorithms implementing functions such as target detection, localisation, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, which are primarily meant for stationary signals. Interestingly, the signals received by sonar sensors are often non-stationary, and hence processing methods capable of handling the non-stationarity will fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are among the best DSP tools for non-stationary signal processing, allowing signals to be analyzed in the time and frequency domains simultaneously. However, apart from the short-time Fourier transform (STFT), TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in the fields of speech and image processing and in biomedical applications, but few in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each application.
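Of the TFMs mentioned, the STFT is the simplest to sketch. The example below, with invented parameters, runs a windowed FFT over a synthetic linear chirp (a stand-in for a non-stationary sonar return) and shows the dominant frequency drifting over time, which a single full-record Fourier transform would smear into one broad band.

```python
import numpy as np

fs = 8000.0                         # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Linear chirp: instantaneous frequency sweeps 200 -> 1200 Hz over one second.
x = np.sin(2 * np.pi * (200 * t + 0.5 * 1000 * t ** 2))

def stft(x, n_win=256, hop=128):
    """Short-time Fourier transform with a Hann window (magnitude only)."""
    win = np.hanning(n_win)
    frames = [x[i:i + n_win] * win for i in range(0, len(x) - n_win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

S = stft(x)                         # frames x frequency-bins magnitude spectrogram
peak_bins = S.argmax(axis=1)        # dominant frequency bin per frame
peak_hz = peak_bins * fs / 256      # bin index -> Hz
```

The other three TFMs in the thesis trade this simple tiling of the time-frequency plane for better resolution or chirp matching, at higher computational cost.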

Relevance: 100.00%

Abstract:

A fast simulated annealing algorithm is developed for automatic object recognition. The normalized correlation coefficient is used as a measure of the match between a hypothesized object and an image. Templates are generated on-line during the search by transforming model images. Simulated annealing reduces the search time by orders of magnitude with respect to an exhaustive search. The algorithm is applied to the problem of how landmarks, for example, traffic signs, can be recognized by an autonomous vehicle or a navigating robot. The algorithm works well in noisy, real-world images of complicated scenes for model images with high information content.
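A toy version of this search, under loud assumptions: the test image is an invented smooth scene, and the template is simply cut from the image rather than generated by transforming a model image as in the article. Simulated annealing explores candidate positions scored by the normalized correlation coefficient, keeping the best match found.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented smooth test image: a Gaussian "landmark" on a gentle gradient.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
image = np.exp(-((xx - 24.0) ** 2 + (yy - 40.0) ** 2) / 120.0) + 0.002 * xx

# Template = a patch cut from the image, so the true match has NCC = 1.
true_r, true_c, tsz = 34, 18, 16
template = image[true_r:true_r + tsz, true_c:true_c + tsz]

def ncc(r, c):
    """Normalized correlation coefficient between template and image patch."""
    patch = image[r:r + tsz, c:c + tsz]
    a, b = patch - patch.mean(), template - template.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Simulated annealing over (row, col): local moves plus occasional random
# jumps, Metropolis acceptance, geometric cooling, best-so-far retained.
hi = n - tsz
pos = (int(rng.integers(0, hi + 1)), int(rng.integers(0, hi + 1)))
score = ncc(*pos)
best_pos, best_score = pos, score
T = 0.5
for _ in range(4000):
    if rng.random() < 0.1:          # occasional random jump
        cand = (int(rng.integers(0, hi + 1)), int(rng.integers(0, hi + 1)))
    else:                           # local move
        cand = (int(np.clip(pos[0] + rng.integers(-3, 4), 0, hi)),
                int(np.clip(pos[1] + rng.integers(-3, 4), 0, hi)))
    s = ncc(*cand)
    if s > score or rng.random() < np.exp((s - score) / T):
        pos, score = cand, s
    if score > best_score:
        best_pos, best_score = pos, score
    T *= 0.999                      # geometric cooling schedule
```

An exhaustive search would evaluate all (n - tsz + 1)^2 positions; the annealer evaluates only a few thousand, which is the source of the speed-up claimed in the abstract.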

Relevance: 100.00%

Abstract:

This thesis presents the Kou model, a jump-diffusion with double-exponential jumps, for the valuation of European call options with oil prices as the underlying asset. The numerical calculations behind the formulation of analytical expressions are shown; these expressions are solved through the implementation of efficient numerical algorithms that yield the theoretical prices of the options evaluated. The advantages of methods such as the Fourier transform are then discussed, given the relative simplicity of their programming compared with the development of other numerical techniques. This method is used together with a non-parametric regularized calibration exercise which, by minimizing the squared errors subject to a penalty based on the concept of relative entropy, yields prices for call options on oil, giving the model a better capacity to assign fair prices relative to those traded in the market.
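A hedged Monte Carlo sketch of the Kou double-exponential jump-diffusion for a European call, with all market and jump parameters invented; the thesis itself prices via Fourier-transform methods, for which a simulation like this serves as a cross-check.

```python
import math
import numpy as np

def kou_call_mc(S0, K, r, sigma, T, lam, p, eta1, eta2, n_paths, seed=7):
    """European call under Kou's double-exponential jump-diffusion,
    priced by risk-neutral Monte Carlo on the terminal log-price."""
    rng = np.random.default_rng(seed)
    # Jump compensator zeta = E[e^Y - 1]: up-jumps ~ Exp(eta1) w.p. p,
    # down-jumps ~ -Exp(eta2) w.p. 1-p (requires eta1 > 1).
    zeta = p * eta1 / (eta1 - 1) + (1 - p) * eta2 / (eta2 + 1) - 1
    drift = (r - 0.5 * sigma ** 2 - lam * zeta) * T
    z = rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * T, n_paths)
    jump_sum = np.zeros(n_paths)
    for i in np.nonzero(n_jumps)[0]:
        up = rng.random(n_jumps[i]) < p
        y = np.where(up,
                     rng.exponential(1 / eta1, n_jumps[i]),
                     -rng.exponential(1 / eta2, n_jumps[i]))
        jump_sum[i] = y.sum()
    ST = S0 * np.exp(drift + sigma * math.sqrt(T) * z + jump_sum)
    return math.exp(-r * T) * float(np.maximum(ST - K, 0).mean())

price = kou_call_mc(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                    lam=1.0, p=0.4, eta1=10.0, eta2=5.0, n_paths=100_000)
```

With `lam=0` the model collapses to Black-Scholes, which gives a convenient closed-form sanity check on the simulation.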

Relevance: 100.00%

Abstract:

This article presents an overview of a transform method for solving linear and integrable nonlinear partial differential equations. This new transform method, proposed by Fokas, yields a generalization and unification of various fundamental mathematical techniques and, in particular, it yields an extension of the Fourier transform method.

Relevance: 100.00%

Abstract:

Pulsed Phase Thermography (PPT) has proven effective for depth retrieval of flat-bottomed holes in different materials such as plastics and aluminum. In PPT, amplitude and phase-delay signatures are available after data acquisition (carried out in a similar way as in classical Pulsed Thermography) by applying a transformation algorithm such as the Fourier Transform (FT) to the thermal profiles. The authors have recently presented an extended review of PPT theory, including a new inversion technique for depth retrieval that correlates the depth with the blind frequency fb (the frequency at which a defect produces just enough phase contrast to be detected). An automatic defect depth retrieval algorithm has also been proposed, evidencing PPT's capabilities as a practical inversion technique. In addition, the use of normalized parameters to account for defect size variation, as well as depth retrieval for complex-shaped composites (GFRP and CFRP), is currently under investigation. In this paper, steel plates containing flat-bottomed holes at different depths (from 1 to 4.5 mm) are tested by quantitative PPT. Least-squares regression shows excellent agreement between depth and the inverse square root of the blind frequency, which can be used for depth inversion. Experimental results on steel plates with simulated corrosion are presented as well. It is worth noting that results improve when PPT is performed on reconstructed (synthetic) rather than on raw thermal data.
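A sketch of the PPT processing chain on synthetic cooling curves, where the simplified 1D reflection model, diffusivity and defect depth are all assumptions: the FFT of each pixel's thermal profile yields a phase spectrum, and the defect shows up as phase contrast at low frequencies.

```python
import numpy as np

# Synthetic surface-temperature decays after a heat pulse, sampled at fs Hz.
fs, n = 20.0, 512
t = np.arange(1, n + 1) / fs
alpha = 4e-6                        # assumed thermal diffusivity, m^2/s

def cooling_curve(depth_m):
    """1D semi-infinite cooling with a single reflection term from a defect
    at the given depth; infinite depth plays the role of a sound area."""
    base = 1.0 / np.sqrt(np.pi * t)
    if np.isinf(depth_m):
        return base
    return base * (1.0 + 2.0 * np.exp(-depth_m ** 2 / (alpha * t)))

sound = cooling_curve(np.inf)
defect = cooling_curve(0.002)       # hypothetical defect 2 mm deep

# PPT: phase of the FFT of each thermal profile; contrast = defect - sound.
phase_sound = np.angle(np.fft.rfft(sound))
phase_defect = np.angle(np.fft.rfft(defect))
contrast = phase_defect - phase_sound
freqs = np.fft.rfftfreq(n, 1 / fs)
```

Scanning `contrast` toward higher `freqs` until it falls below a detection threshold is, in essence, how the blind frequency fb used for depth inversion is located.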

Relevance: 100.00%

Abstract:

For the first time, vertical column measurements of nitric acid (HNO3) above the Arctic Stratospheric Ozone Observatory (AStrO) at Eureka (80N, 86W), Canada, have been made during polar night, using lunar spectra recorded with a Fourier Transform Infrared (FTIR) spectrometer from October 2001 to March 2002. AStrO is part of the primary Arctic station of the Network for the Detection of Stratospheric Change (NDSC). These measurements were compared with FTIR measurements at two other NDSC Arctic sites: Thule, Greenland (76.5N, 68.8W) and Kiruna, Sweden (67.8N, 20.4E). The measurements were also compared with two atmospheric models: the Canadian Middle Atmosphere Model (CMAM) and SLIMCAT. This is the first time that CMAM HNO3 columns have been compared with observations in the Arctic. Eureka lunar measurements are in good agreement with solar measurements made with the same instrument. Eureka and Thule HNO3 columns are consistent within measurement error. Differences between HNO3 columns measured at Kiruna and those measured at Eureka and Thule can be explained by the available sunlight hours and the polar vortex location. The comparison of CMAM HNO3 columns with Eureka and Kiruna data shows good agreement, considering CMAM's small inter-annual variability. The warm 2001/02 winter, with almost no Polar Stratospheric Clouds (PSCs), makes the comparison of the warm-climate version of CMAM with these observations a good test for CMAM under PSC-free conditions. SLIMCAT captures the magnitude of the HNO3 columns at Eureka, and their day-to-day variability, but generally reports higher HNO3 columns than the CMAM climatological mean.

Relevance: 100.00%

Abstract:

A class of identification algorithms is introduced for Gaussian process (GP) models. The fundamental approach is to propose a new kernel function that leads to a covariance matrix of low rank, a property that is then exploited for computational efficiency in both model parameter estimation and model prediction. The objective of maximizing either the marginal likelihood or the Kullback-Leibler (K-L) divergence between the estimated output probability density function (pdf) and the true pdf is used as the respective cost function. For each cost function, an efficient coordinate descent algorithm is proposed that estimates the kernel parameters using a one-dimensional derivative-free search, and the noise variance using a fast gradient descent algorithm. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
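A sketch of how a low-rank covariance is exploited at prediction time. The kernel below is a generic m-basis-function construction, K = Phi Phi^T, standing in for the paper's proposed kernel, and the data are synthetic; with that structure the Woodbury identity reduces the n x n solve to an m x m one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data from a smooth function.
n, m = 200, 12
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + 0.1 * rng.standard_normal(n)
noise = 0.1 ** 2

# Low-rank kernel K(x, x') = phi(x) . phi(x'): m Gaussian basis functions
# (an illustrative stand-in for the paper's proposed kernel).
centers = np.linspace(-3, 3, m)

def phi(x):
    return np.exp(-0.5 * (x[:, None] - centers[None, :]) ** 2)

Phi = phi(x)                                   # n x m, so K = Phi Phi^T has rank <= m

# GP posterior mean at test points via the Woodbury identity:
# (Phi Phi^T + s I)^{-1} = (I - Phi (s I_m + Phi^T Phi)^{-1} Phi^T) / s,
# so only an m x m system is solved instead of an n x n one.
xs = np.linspace(-3, 3, 50)
A = noise * np.eye(m) + Phi.T @ Phi            # m x m
alpha = (y - Phi @ np.linalg.solve(A, Phi.T @ y)) / noise
mean_lowrank = phi(xs) @ (Phi.T @ alpha)

# Direct O(n^3) computation for comparison.
K = Phi @ Phi.T
mean_direct = (phi(xs) @ Phi.T) @ np.linalg.solve(K + noise * np.eye(n), y)
```

The same low-rank trick applies inside marginal-likelihood evaluation via the matrix determinant lemma, which is what makes the parameter estimation step cheap as well.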

Relevance: 100.00%

Abstract:

Pipeline leak detection is a matter of great interest for companies that transport petroleum and its derivatives, in the face of the rising exigencies of environmental policies in industrialized and industrializing countries. However, existing technologies are not yet fully consolidated, and many studies have been carried out in order to achieve better sensitivity and reliability for pipeline leak detection over a wide range of flow conditions. In this context, this study presents the results of frequency-spectrum analysis of pressure signals from pipelines under several flow conditions, such as normal flow, leakage and pump switching. The results show that it is possible to distinguish between the frequency spectra of those different flow conditions, allowing liquid pipeline leakages to be recognized and announced from pressure monitoring. Based upon these results, a pipeline leak detection algorithm employing frequency analysis of pressure signals is proposed, along with a methodology for its tuning and calibration. The proposed algorithm and its tuning methodology are evaluated with data obtained from real leakages produced in pipelines transferring crude oil and water, in order to assess their sensitivity, reliability and applicability to different flow conditions.
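An illustrative sketch with entirely invented signal models: FFT magnitude spectra of pressure records under two flow conditions are compared through a low-frequency band-energy ratio, the kind of spectral feature such a detection algorithm can threshold after tuning.

```python
import numpy as np

rng = np.random.default_rng(11)
fs, n = 100.0, 4096                         # assumed sampling rate and record length
t = np.arange(n) / fs

def band_ratio(p, f_lo=0.2, f_hi=2.0):
    """Fraction of the (mean-removed) spectral energy inside a low-frequency band."""
    P = np.abs(np.fft.rfft(p - p.mean())) ** 2
    f = np.fft.rfftfreq(n, 1 / fs)
    return float(P[(f >= f_lo) & (f <= f_hi)].sum() / P[1:].sum())

# Normal flow: broadband sensor/turbulence noise around the operating pressure.
p_normal = 50.0 + 0.05 * rng.standard_normal(n)
# Leak: the same background plus a low-frequency oscillation and a slow pressure drop.
p_leak = p_normal + 0.2 * np.sin(2 * np.pi * 0.8 * t) - 0.01 * t

r_normal = band_ratio(p_normal)
r_leak = band_ratio(p_leak)
```

Calibration, in this toy form, amounts to choosing the band limits and the decision threshold from records of known flow conditions.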

Relevance: 100.00%

Abstract:

Electric energy is essential to the development of modern society, and its increasing demand in recent years, an effect of population and economic growth, has made companies more concerned with the quality and continuity of supply, factors regulated by ANEEL (Agência Nacional de Energia Elétrica). These factors must be maintained when a permanent fault occurs in the system: the location of the defect that caused the power interruption should be identified quickly, which is not a simple task because of the complexity of current systems. An example occurs in multi-terminal transmission lines, which interconnect existing circuits to feed the demand. Such transmission lines have been adopted as a feasible solution for supplying loads whose magnitudes do not economically justify the construction of new substations. This work presents a fault location algorithm for multi-terminal transmission lines with two and three terminals. The location method is based on the use of voltage and current fundamental phasors, as well as on the representation of the line through its series impedance. The wavelet transform is an effective mathematical tool for analyzing signals with discontinuities and is therefore used to synchronize the voltage and current data. The Fourier transform is another tool used in this work, to extract the voltage and current fundamental phasors. Tests to validate the applicability of the location algorithm used data from faulty signals simulated in ATP (Alternative Transients Program) as well as real data obtained from oscillographic recorders installed on CHESF's lines.
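The Fourier-based phasor extraction step can be sketched as a one-cycle DFT at the fundamental; the 60 Hz system frequency, sampling rate and waveform values below are assumptions for illustration.

```python
import numpy as np

f0 = 60.0                            # assumed power-system frequency, Hz
n = 64                               # samples per cycle
t = np.arange(n) / (n * f0)          # one full cycle

def fundamental_phasor(x):
    """One-cycle DFT at the fundamental: for x[k] = A*cos(2*pi*f0*t[k] + phi)
    returns the complex phasor A*e^{j*phi} (peak-value convention)."""
    k = np.arange(n)
    return (2.0 / n) * np.sum(x * np.exp(-2j * np.pi * k / n))

# Example voltage samples: 311 V peak at phase 0.5 rad, plus a 3rd-harmonic
# component that the one-cycle DFT rejects exactly.
v = 311.0 * np.cos(2 * np.pi * f0 * t + 0.5) + 20.0 * np.cos(2 * np.pi * 3 * f0 * t)
V = fundamental_phasor(v)
```

The fault-location equations then operate on phasors like `V` from each terminal, once the wavelet-based step has aligned the records in time.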

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)