909 results for transform-based
Abstract:
This paper introduces a novel method of estimating the Fourier transform of deterministic continuous-time signals from a finite number N of their nonuniformly spaced measurements. These samples, located at a mixture of deterministic and random time instants, are collected at sub-Nyquist rates since no constraints are imposed on either the bandwidth or the spectral support of the processed signal. It is shown that the proposed estimation approach converges uniformly for all frequencies at the rate N^−5 or faster. This implies that it significantly outperforms its alias-free-sampling-based predecessors, namely the stratified and antithetical stratified estimates, which are shown to converge uniformly at a rate of N^−1. Simulations are presented to demonstrate the superior performance and low complexity of the introduced technique.
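To make the setting concrete, here is a minimal Python sketch of a direct, equal-weight estimator of the Fourier transform built from N nonuniform samples on [0, T]. The sampling scheme and quadrature weighting are illustrative assumptions; this is not the paper's estimator and does not achieve its N^−5 convergence rate.

```python
import numpy as np

def ft_estimate(x_samples, t_samples, freqs, T):
    """Equal-weight direct estimate of X(f) = integral_0^T x(t) exp(-i 2 pi f t) dt
    from N (possibly nonuniform) samples x(t_n): each sample stands in for an
    interval of length T/N."""
    t = np.asarray(t_samples, dtype=float)
    x = np.asarray(x_samples, dtype=float)
    w = T / len(t)                                   # quadrature weight per sample
    E = np.exp(-2j * np.pi * np.outer(freqs, t))     # (n_freqs, N) complex exponentials
    return w * (E @ x)

# Example: one random sample per stratum (a simple stratified scheme) of a two-tone signal.
rng = np.random.default_rng(0)
T, N = 1.0, 512
t_n = (np.arange(N) + rng.random(N)) * (T / N)
sig = lambda t: np.cos(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 75 * t)
X_hat = ft_estimate(sig(t_n), t_n, np.linspace(0.0, 100.0, 401), T)
```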
Abstract:
Partial moments are extensively used in actuarial science for the analysis of risks. Since the first-order partial moment provides the expected loss in a stop-loss treaty with infinite cover as a function of the priority, it is referred to as the stop-loss transform. In the present work, we discuss distributional and geometric properties of the first- and second-order partial moments defined in terms of the quantile function. Relationships of the scaled stop-loss transform curve with the Lorenz, Gini, Bonferroni and Leimkuhler curves are developed.
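For context, the standard quantities the abstract refers to can be written out explicitly. The display below uses common notation (survival function F̄, quantile function Q) and is a sketch of the usual definitions, not text taken from the paper.

```latex
% First-order partial moment (stop-loss transform) at retention t:
\pi_X(t) \;=\; \mathbb{E}\,(X-t)_+ \;=\; \int_t^{\infty} \bar F_X(x)\,dx .
% Quantile-based form, with t = Q(u):
\pi_X\bigl(Q(u)\bigr) \;=\; \int_u^{1} \bigl(Q(p)-Q(u)\bigr)\,dp , \qquad 0<u<1 .
% Second-order partial moment at retention t:
\pi^{(2)}_X(t) \;=\; \mathbb{E}\,(X-t)_+^{2} .
```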
Abstract:
Calculations of the absorption of solar radiation by atmospheric gases, and water vapor in particular, are dependent on the quality of databases of spectral line parameters. There has been increasing scrutiny of databases such as HITRAN in recent years, but this has mostly been performed on a band-by-band basis. We report nine high-spectral-resolution (0.03 cm−1) measurements of the solar radiation reaching the surface in southern England over the wave number range 2000 to 12,500 cm−1 (0.8 to 5 μm) that allow a unique assessment of the consistency of the spectral line databases over this entire spectral region. The data are assessed in terms of the modeled water vapor column that is required to bring calculations and observations into agreement; for an entirely consistent database, this water vapor column should be constant with frequency. For the HITRAN01 database, the spread in water vapor column is about 11%, with distinct shifts between different spectral regions. The HITRAN04 database is in significantly better agreement (about 5% spread) in the completely updated 3000 to 8000 cm−1 spectral region, but inconsistencies between individual spectral regions remain: for example, in the 8000 to 9500 cm−1 spectral region, the results indicate an 18% (±1%) underestimate in line intensities with respect to the 3000 to 8000 cm−1 region. These measurements also indicate the impact of isotopic fractionation of water vapor in the 2500 to 2900 cm−1 range, where HDO lines dominate over the lines of the most abundant isotope of H2O.
Abstract:
We report on the consistency of water vapour line intensities in selected spectral regions between 800 and 12,000 cm−1 under atmospheric conditions using sun-pointing Fourier transform infrared spectroscopy. Measurements were made across a number of days at both a low- and a high-altitude field site, sampling a relatively moist and a relatively dry atmosphere. Our data suggest that across most of the 800–12,000 cm−1 spectral region, water vapour line intensities in recent spectral line databases are generally consistent with what was observed. However, we find that HITRAN-2008 water vapour line intensities are systematically lower by up to 20% in the 8000–9200 cm−1 spectral interval relative to other spectral regions. This discrepancy is essentially removed when two new linelists (UCL08, a compilation of linelists and ab-initio calculations, and one based on recent laboratory measurements by Oudot et al. (2010) [10] in the 8000–9200 cm−1 spectral region) are used. This strongly suggests that the H2O line strengths in the HITRAN-2008 database are indeed underestimated in this spectral region and in need of revision. The calculated global-mean clear-sky absorption of solar radiation is increased by about 0.3 W m−2 when using either the UCL08 or Oudot line parameters in the 8000–9200 cm−1 region instead of HITRAN-2008. We also found that the effect of isotopic fractionation of HDO is evident in the 2500–2900 cm−1 region in the observations.
Abstract:
A detailed spectrally-resolved extraterrestrial solar spectrum (ESS) is important for line-by-line radiative transfer modeling in the near-infrared (near-IR). Very few observationally-based high-resolution ESS are available in this spectral region; consequently, the theoretically-calculated ESS by Kurucz has been widely adopted. We present the CAVIAR (Continuum Absorption at Visible and Infrared Wavelengths and its Atmospheric Relevance) ESS, which is derived using the Langley technique applied to calibrated observations from a ground-based high-resolution Fourier transform spectrometer (FTS) in atmospheric windows from 2000 to 10,000 cm−1 (1–5 μm). There is good agreement between the strengths and positions of solar lines in the CAVIAR and the satellite-based ACE-FTS (Atmospheric Chemistry Experiment-FTS) ESS in the spectral region where they overlap, and good agreement with other ground-based FTS measurements in two near-IR windows. However, there are significant differences in structure between the CAVIAR ESS and spectra from semi-empirical models. In addition, we found a difference of up to 8% in the absolute (and hence the wavelength-integrated) irradiance between the CAVIAR ESS and that of Thuillier et al., which was based on measurements from the Atmospheric Laboratory for Applications and Science satellite and other sources. In many spectral regions this difference is significant, as the coverage factor k = 2 (or 95% confidence limit) uncertainties in the two sets of observations do not overlap. Since the total solar irradiance is relatively well constrained, if the CAVIAR ESS is correct, this would indicate an integrated "loss" of solar irradiance of about 30 W m−2 in the near-IR that would have to be compensated by an increase at other wavelengths.
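For readers unfamiliar with the Langley technique mentioned above, the sketch below fits the classical single-channel Langley plot (ln V linear in airmass) and extrapolates to zero airmass. The data and function names are illustrative; this is not the CAVIAR calibration pipeline.

```python
import numpy as np

def langley_extrapolation(airmass, signal):
    """Classical single-channel Langley plot: assuming V = V0 * exp(-tau * m),
    fit ln(V) = ln(V0) - tau * m over a range of airmasses m and extrapolate
    to m = 0 to recover the extraterrestrial signal V0."""
    m = np.asarray(airmass, dtype=float)
    ln_v = np.log(np.asarray(signal, dtype=float))
    slope, intercept = np.polyfit(m, ln_v, 1)
    return np.exp(intercept), -slope                 # (V0, optical depth tau)

# Illustrative use with synthetic data (made-up V0 = 1.8 and tau = 0.12).
rng = np.random.default_rng(1)
m = np.linspace(2.0, 6.0, 20)
v = 1.8 * np.exp(-0.12 * m) * (1.0 + 0.01 * rng.standard_normal(20))
v0_est, tau_est = langley_extrapolation(m, v)
```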
Abstract:
We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three datasets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier Transform Spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths since 2009. The ground-based instrument is a Bruker 125HR Fourier Transform Infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Lab at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profiles and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands, depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and between 1.5 and 3 for PEARL-FTS, while ACE-FTS has considerably more information (roughly one degree of freedom per altitude level). We take partial columns between roughly 5 and 30 km for the ACE-FTS–PEARL-FTS comparison, and between 5 and 10 km for the other pairs. The DOFS for the partial columns are between 1.2 and 2 for PEARL-FTS collocated with ACE-FTS, and between 0.1 and 0.5 for PEARL-FTS collocated with TANSO-FTS or for TANSO-FTS collocated with either other instrument, while ACE-FTS has much higher information content. For all pairs, the partial column differences are within ±3 × 10^22 molecules cm−2. Expressed as median ± median absolute deviation (in absolute or relative terms), these differences are 0.11 ± 9.60 × 10^20 molecules cm−2 (0.012 ± 1.018 %) for TANSO-FTS–PEARL-FTS, −2.6 ± 2.6 × 10^21 molecules cm−2 (−1.6 ± 1.6 %) for ACE-FTS–PEARL-FTS, and 7.4 ± 6.0 × 10^20 molecules cm−2 (0.78 ± 0.64 %) for TANSO-FTS–ACE-FTS. The differences for ACE-FTS–PEARL-FTS and TANSO-FTS–PEARL-FTS partial columns decrease significantly as a function of PEARL partial columns, whereas the range of partial column values for TANSO-FTS–ACE-FTS collocations is too small to draw any conclusion on their dependence on ACE-FTS partial columns.
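The smoothing step described above (adjusting a higher-resolution profile to the vertical resolution of a lower-resolution retrieval) is conventionally done with the retrieval's averaging kernel. A minimal sketch, assuming the standard Rodgers-type formulation and placeholder variable names (the paper's exact grids and a priori profiles are not reproduced):

```python
import numpy as np

def smooth_to_low_resolution(x_high, x_apriori, avg_kernel):
    """Smooth a higher-resolution profile to the vertical resolution of a
    lower-resolution retrieval: x_smoothed = x_a + A (x_high - x_a), where
    x_a and A are the a priori profile and averaging kernel matrix of the
    lower-resolution instrument, and x_high has already been interpolated
    onto that instrument's retrieval grid."""
    x_h = np.asarray(x_high, dtype=float)
    x_a = np.asarray(x_apriori, dtype=float)
    A = np.asarray(avg_kernel, dtype=float)
    return x_a + A @ (x_h - x_a)
```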
Abstract:
This paper presents a technique for oriented texture classification based on the Hough transform and Kohonen's neural network model. In this technique, oriented texture features are extracted from the Hough space by means of two distinct strategies: the first operates on a non-uniformly sampled Hough space, while the second concentrates on the peaks produced in the Hough space. The described technique gives good results for the classification of oriented textures, a common phenomenon in nature underlying an important class of images. Experimental results are presented to demonstrate the performance of the new technique in comparison with an implemented technique based on Gabor filters.
Abstract:
In this paper we propose a novel method for shape analysis called HTS (Hough Transform Statistics), which uses statistics from the Hough Transform space to characterize the shape of objects in digital images. Experimental results showed that the HTS descriptor is robust and presents better accuracy than some traditional shape description methods. Furthermore, the HTS algorithm has linear complexity, which is an important requirement for content-based image retrieval from large databases.
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since they presented precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays.
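As a rough illustration of the idea of building shape features from statistics of the Hough space, the sketch below accumulates a straight-line Hough transform (via scikit-image) and summarises each angle column with simple statistics. The exact HTS/HTSn feature definitions from the paper are not reproduced here; statistics, normalisation and the toy contour are assumptions.

```python
import numpy as np
from skimage.transform import hough_line

def hough_space_statistics(edge_image, n_angles=180):
    """Accumulate the straight-line Hough transform of a binary edge image and
    summarise each angle column of the (distance x angle) accumulator with
    simple statistics (mean, std, max), giving a fixed-length shape feature."""
    theta = np.linspace(-np.pi / 2.0, np.pi / 2.0, n_angles, endpoint=False)
    hspace, _, _ = hough_line(edge_image, theta=theta)
    h = hspace.astype(float)
    if h.max() > 0:
        h /= h.max()                                  # normalise the accumulator
    return np.concatenate([h.mean(axis=0), h.std(axis=0), h.max(axis=0)])

# Toy usage on a synthetic square contour.
img = np.zeros((64, 64), dtype=bool)
img[16, 16:48] = img[47, 16:48] = img[16:48, 16] = img[16:48, 47] = True
features = hough_space_statistics(img)                # length 3 * 180
```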
Abstract:
Non-Hodgkin lymphomas are of many distinct types, and different classification systems make it difficult to diagnose them correctly. Many of these systems classify lymphomas based only on what they look like under a microscope. In 2008 the World Health Organisation (WHO) introduced the most recent system, which also considers the chromosome features of the lymphoma cells and the presence of certain proteins on their surface; the WHO system is the one we apply in this work. Here we present an automatic method to classify histological images of three types of non-Hodgkin lymphoma. Our method is based on the Stationary Wavelet Transform (SWT) and consists of three steps: 1) extracting sub-bands from the histological image through the SWT, 2) applying Analysis of Variance (ANOVA) to remove noise and select the most relevant information, 3) classifying with the Support Vector Machine (SVM) algorithm. The Linear, RBF and Polynomial kernels were evaluated with our method applied to 210 lymphoma images from the National Institute on Aging. We concluded that the following combination led to the most relevant results: detail sub-band, ANOVA, and SVM with the Linear and RBF kernels.
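A hedged Python sketch of the three-step pipeline described above (SWT sub-band extraction, ANOVA-based selection, SVM classification) is shown below, using PyWavelets and scikit-learn with random placeholder data in place of the lymphoma images. The wavelet, decomposition level and number of selected features are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def swt_detail_features(image, wavelet="db1", level=1):
    """Step 1: 2-D stationary wavelet transform; keep the detail sub-bands
    (horizontal, vertical, diagonal) of the last level as a flat feature
    vector. Image side lengths must be divisible by 2**level for pywt.swt2."""
    coeffs = pywt.swt2(np.asarray(image, dtype=float), wavelet, level=level)
    _, (cH, cV, cD) = coeffs[-1]
    return np.concatenate([cH.ravel(), cV.ravel(), cD.ravel()])

# Steps 2 and 3: ANOVA F-test feature selection followed by an SVM.
# Random placeholder data stands in for the 210 lymphoma images and their labels.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = rng.integers(0, 3, size=20)
X = np.vstack([swt_detail_features(img) for img in images])
clf = make_pipeline(SelectKBest(f_classif, k=200), SVC(kernel="linear"))
clf.fit(X, labels)
```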
Abstract:
Despite the efficacy of minutia-based fingerprint matching techniques for good-quality images captured by optical sensors, minutia-based techniques often do not perform well on poor-quality images or on fingerprint images captured by small solid-state sensors. Solid-state fingerprint sensors are being increasingly deployed in a wide range of applications for user authentication purposes. It is therefore necessary to develop new fingerprint matching techniques that utilize other features to deal with fingerprint images captured by solid-state sensors. This paper presents a new fingerprint matching technique based on fingerprint ridge features. The technique was assessed on the MSU-VERIDICOM database, which consists of fingerprint impressions obtained from 160 users (4 impressions per finger) using a solid-state sensor. Combining the ridge-based matching scores computed by the proposed technique with minutia-based matching scores reduces the false non-match rate by approximately 1.7% at a false match rate of 0.1%.
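The score combination mentioned above can be illustrated with a generic sum-rule fusion. The normalisation and weighting below are assumptions for illustration only, not the paper's actual fusion scheme.

```python
import numpy as np

def fuse_scores(minutia_scores, ridge_scores, w_ridge=0.5):
    """Sum-rule score fusion: min-max normalise each matcher's scores over the
    comparison set and return their weighted sum as the combined score."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)
    return (1.0 - w_ridge) * minmax(minutia_scores) + w_ridge * minmax(ridge_scores)

# Example with made-up scores for five candidate matches.
combined = fuse_scores([0.62, 0.40, 0.91, 0.55, 0.70], [110, 95, 160, 120, 100])
```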
Abstract:
The influence of shear fields on water-based systems was investigated in this thesis. The non-linear rheological behaviour of spherical and rod-like particles was examined with Fourier-transform rheology under LAOS (large amplitude oscillatory shear) conditions. As a model system for spherical particles, two different kinds of polystyrene dispersions, each with a solid content higher than 0.3, were synthesised within this work. Owing to their differences in polydispersity and Debye length, differences were also found in their rheology. In FT-rheology both kinds of dispersions showed a similar rise in the magnitudes of the odd higher harmonics, as predicted by a model; the second harmonics that additionally appeared in some cases were not predicted. A novel method to analyse the time-domain signal was developed, which splits it into four characteristic functions that correspond to rheological phenomena. In some cases the intensities of the Fourier components can interfere negatively. fd-virus particles were used as a rod-like model system, which already shows highly non-linear behaviour at concentrations below 1 wt%. Predictions for the dependence of the higher harmonics on the strain amplitude described the non-linear behaviour well at large strain amplitudes, but less well at small ones. Additionally, the trends of the rheological behaviour could be described by a theory for rod-like particles. An existing rheo-optical set-up was enhanced by reducing the background birefringence by a factor of 20 and by increasing the time resolution by a factor of 24. In addition, a combination of FT-rheology and rheo-optics was achieved. The influence of a constant shear field on the crystallisation of zinc oxide in the presence of a polymer was also examined; the crystallites showed a reduction in length by a factor of 2. The directed addition of polymers in combination with a defined shear field can thus be a simple way to achieve a defined change in crystallite shape.
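The FT-rheology analysis referred to above amounts to Fourier-transforming the steady-state stress response and reading off the relative intensities of the higher harmonics of the excitation frequency. The sketch below is a simplified stand-in (ignoring windowing, cycle averaging and phase handling) with made-up example data, not the thesis's analysis code.

```python
import numpy as np

def relative_harmonics(stress, fs, f_excitation, n_harmonics=7):
    """Fourier-transform a steady-state stress response and return the
    magnitudes of the first n harmonics of the excitation frequency,
    normalised by the fundamental (I_n / I_1)."""
    spec = np.abs(np.fft.rfft(np.asarray(stress, dtype=float)))
    freqs = np.fft.rfftfreq(len(stress), d=1.0 / fs)
    idx = [int(np.argmin(np.abs(freqs - n * f_excitation)))
           for n in range(1, n_harmonics + 1)]
    mags = spec[idx]
    return mags / mags[0]

# Toy non-linear response: a distorted 1 Hz oscillation sampled at 100 Hz.
fs, f0 = 100.0, 1.0
t = np.arange(0, 20.0, 1.0 / fs)
stress = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(3 * 2 * np.pi * f0 * t)
I_rel = relative_harmonics(stress, fs, f0)            # I_3/I_1 close to 0.05
```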
Abstract:
Adaptive embedded systems are required in various applications. This work addresses these needs in the area of adaptive image compression on FPGA devices. A simplified version of an evolution strategy is utilized to optimize the wavelet filters of a Discrete Wavelet Transform algorithm. We propose an adaptive image compression system in FPGA in which an optimized memory architecture, parallel processing and optimized task scheduling reduce the time of evolution. The proposed solution has been extensively evaluated in terms of compression quality as well as processing time. The proposed architecture reduces the time of evolution by 44% compared to our previous reports while keeping the compression quality unchanged with respect to existing implementations. The system is able to find an optimized set of wavelet filters in less than 2 min whenever the type of input data changes.
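The "simplified evolution strategy" idea can be illustrated with a generic (1+1)-ES loop. The sketch below optimises an arbitrary parameter vector against a user-supplied fitness; it does not model the FPGA architecture, the paper's wavelet-filter encoding, or its compression-quality fitness, and the toy target vector is made up.

```python
import numpy as np

def one_plus_one_es(fitness, x0, sigma=0.1, iterations=200, seed=0):
    """Minimal (1+1) evolution strategy: perturb the current parameter vector
    with Gaussian noise and keep the offspring whenever its fitness (to be
    minimised) is at least as good as the parent's."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = fitness(x)
    for _ in range(iterations):
        candidate = x + sigma * rng.standard_normal(x.shape)
        fc = fitness(candidate)
        if fc <= fx:
            x, fx = candidate, fc
    return x, fx

# Toy use: evolve four filter-like coefficients towards a fixed target vector.
target = np.array([0.48, 0.84, 0.22, -0.13])
best, error = one_plus_one_es(lambda c: float(np.sum((c - target) ** 2)), np.zeros(4))
```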
Abstract:
The wavelet transform and the Lipschitz exponent perform well in detecting signal singularities. With the bridge crack damage modeled as rotational springs based on fracture mechanics, the deflection time history of the beam under a moving load is determined with a numerical method. The continuous wavelet transform (CWT) is applied to the deflection of the beam to identify the location of the damage, and the Lipschitz exponent is used to evaluate the damage degree. The influence of different damage degrees, multiple damage locations, different sensor locations, load velocity and load magnitude is studied. In addition, the feasibility of this method is verified by a model experiment.
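As a rough illustration of the singularity-detection idea (the decay of wavelet coefficients across scales encodes the local Lipschitz exponent), the Python sketch below fits the log-log decay of CWT magnitudes at a fixed location. The 0.5 normalisation offset and the toy cusp signal are assumptions, and this is not the paper's bridge-damage procedure.

```python
import numpy as np
import pywt

def lipschitz_from_cwt(signal, location, scales=None, wavelet="gaus1"):
    """Fit log2|W(s, location)| against log2(s) over a range of scales s and
    read the local regularity from the slope, following the classical
    modulus-maxima decay law |W(s)| ~ s**(alpha + 1/2) for an L2-normalised
    wavelet (hence the 0.5 subtracted from the fitted slope)."""
    if scales is None:
        scales = np.arange(2, 34)
    coefs, _ = pywt.cwt(np.asarray(signal, dtype=float), scales, wavelet)
    mags = np.abs(coefs[:, location]) + 1e-12          # avoid log of zero
    slope, _ = np.polyfit(np.log2(scales), np.log2(mags), 1)
    return slope - 0.5

# Toy signal with a cusp |t|**0.6 at the centre sample.
t = np.linspace(-1.0, 1.0, 1024)
alpha_est = lipschitz_from_cwt(np.abs(t) ** 0.6, location=512)
```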
Abstract:
Power line interference is one of the main problems in surface electromyogram (EMG) signal analysis. In this work, a new method based on the stationary wavelet packet transform is proposed to estimate and remove this kind of noise from EMG data records. The performance has been quantitatively evaluated with synthetic noisy signals, obtaining good results independently of the signal-to-noise ratio (SNR). For the analyzed cases, the correlation coefficient is around 0.99, the energy relative to the pure EMG signal is 98–104%, the SNR is between 16.64 and 20.40 dB, and the mean absolute error (MAE) is in the range of −69.02 to −65.31 dB. The method has also been applied to 18 real EMG signals, evaluating the percentage of energy relative to the noisy signals. The proposed method adjusts the reduction level to the amplitude of each harmonic present in the analyzed noisy signals (synthetic and real), reducing the harmonics without altering the desired signal.
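PyWavelets does not provide a stationary wavelet packet transform directly, so the sketch below substitutes a plain stationary wavelet transform and soft-thresholds only the detail level whose octave band contains the line frequency. It conveys the flavour of undecimated-wavelet interference reduction but is not the paper's harmonic-adaptive packet method; the wavelet, thresholding rule and toy signal are assumptions.

```python
import numpy as np
import pywt

def attenuate_powerline_swt(emg, fs, f_line=50.0, wavelet="db4"):
    """Decompose the EMG with an undecimated stationary wavelet transform,
    soft-threshold only the detail level whose octave band contains the line
    frequency, and reconstruct with the inverse SWT. The signal length must
    be divisible by 2**level for pywt.swt."""
    x = np.asarray(emg, dtype=float)
    level = min(pywt.swt_max_level(len(x)), int(np.floor(np.log2(fs / f_line))))
    coeffs = pywt.swt(x, wavelet, level=level)         # [(cA_L, cD_L), ..., (cA_1, cD_1)]
    cleaned = []
    for j, (cA, cD) in zip(range(level, 0, -1), coeffs):
        lo, hi = fs / 2 ** (j + 1), fs / 2 ** j        # nominal band of detail level j
        if lo <= f_line <= hi:
            thr = np.median(np.abs(cD)) / 0.6745       # robust noise-scale estimate
            cD = pywt.threshold(cD, thr, mode="soft")
        cleaned.append((cA, cD))
    return pywt.iswt(cleaned, wavelet)

# Toy example: EMG-like noise plus 50 Hz interference, fs = 1024 Hz.
rng = np.random.default_rng(0)
fs = 1024.0
t = np.arange(4096) / fs
noisy = rng.standard_normal(4096) + 0.8 * np.sin(2 * np.pi * 50.0 * t)
cleaned = attenuate_powerline_swt(noisy, fs)
```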