79 results for complex wavelet transform

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 100.00%

Publisher:

Abstract:

The problem of synthetic aperture radar interferometric phase noise reduction is addressed. A new technique based on discrete wavelet transforms is presented. This technique guarantees high-resolution phase estimation without using phase image segmentation. Areas containing only noise are left largely unprocessed. Tests with synthetic and real interferograms are reported.
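
The abstract does not spell out the filtering rule, so the following is only a minimal sketch of the general idea (wavelet-domain smoothing of a wrapped interferometric phase image), assuming PyWavelets; the wavelet, the threshold rule, and the complex-phasor detour are illustrative choices, not the authors' method:

```python
import numpy as np
import pywt

def denoise_phase(phase, wavelet='db4', level=3, k=1.0):
    """Wavelet-domain smoothing of a wrapped interferometric phase image.

    Filtering the complex phasor exp(1j*phase) rather than the raw phase
    avoids artifacts at the 2*pi wrap-around discontinuities.
    """
    parts = []
    for band in (np.cos(phase), np.sin(phase)):
        coeffs = pywt.wavedec2(band, wavelet, level=level)
        # universal soft threshold; sigma from the finest diagonal band (MAD)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = k * sigma * np.sqrt(2 * np.log(band.size))
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode='soft') for d in detail)
            for detail in coeffs[1:]
        ]
        parts.append(pywt.waverec2(coeffs, wavelet))
    return np.arctan2(parts[1], parts[0])  # back to a wrapped phase image
```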

Relevance: 100.00%

Publisher:

Abstract:

The continuous wavelet transform is obtained as a maximum-entropy solution of the corresponding inverse problem. It is well known that although a signal can be reconstructed from its wavelet transform, the expansion is not unique due to the redundancy of continuous wavelets. Hence, the inverse problem has no unique solution. If we want to recognize one solution as "optimal", then an appropriate decision criterion has to be adopted. We show here that the continuous wavelet transform is an "optimal" solution in a maximum entropy sense.
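
For reference, the analysis and reconstruction formulas at stake are the standard ones below (textbook conventions; the abstract does not state the authors' exact entropy functional):

```latex
\[
  W_f(a,b) \;=\; \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty}
      f(t)\,\psi^{*}\!\Bigl(\frac{t-b}{a}\Bigr)\,dt ,
\qquad
  f(t) \;=\; \frac{1}{C_{\psi}} \iint
      W(a,b)\,\frac{1}{\sqrt{|a|}}\,\psi\!\Bigl(\frac{t-b}{a}\Bigr)
      \,\frac{da\,db}{a^{2}} .
\]
% Because continuous wavelets are redundant, many fields W(a,b) besides
% W_f reproduce the same f; a maximum-entropy criterion singles one out.
```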

Relevance: 100.00%

Publisher:

Abstract:

A discussion on the expression proposed in [1]–[3] for deconvolving the wideband density function is presented. We prove here that such an expression reduces to a quantity proportional to the wideband correlation receiver output, or continuous wavelet transform of the received signal with respect to the transmitted one. Moreover, we show that the same result has been implicitly assumed in [1], when the deconvolution equation is derived. We stress the fact that the analyzed approach is just the orthogonal projection of the density function onto the image of the wavelet transform with respect to the transmitted signal. Consequently, the approach can be considered a good representation of the density function only under the prior knowledge that the density function belongs to such a subspace. The choice of the transmitted signal is thus crucial to this approach.
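
In symbols, the wideband correlation receiver output referred to above is, up to a constant, the CWT of the received signal with the transmitted signal as analyzing wavelet (standard conventions; the notation is ours):

```latex
\[
  \eta(a,b) \;\propto\; \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty}
      r(t)\, s^{*}\!\Bigl(\frac{t-b}{a}\Bigr)\,dt
  \;=\; W_{r}(a,b), \quad \text{with mother wavelet } s .
\]
```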

Relevance: 90.00%

Publisher:

Abstract:

The standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method presented consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. They studied several possibilities, concluding that the method producing the best results consists of adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L=(R+G+B)/3) of the multispectral image. The method is, thus, an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the "à trous" algorithm, which allows the use of a dyadic wavelet to merge nondyadic data in a simple and efficient scheme. They used the method to merge SPOT and Landsat TM images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.
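
A minimal sketch of the additive scheme just described, with PyWavelets' undecimated transform (pywt.swt2) standing in for the à trous algorithm; the function and parameter names are illustrative:

```python
import numpy as np
import pywt

def fuse(pan, rgb, wavelet='db1', levels=2):
    """Add high-order wavelet planes of the panchromatic image to the
    intensity L = (R+G+B)/3 of the multispectral image."""
    L = rgb.mean(axis=2)
    # undecimated (stationary) transform as a stand-in for 'a trous';
    # image sides must be divisible by 2**levels for pywt.swt2
    detail = sum(cH + cV + cD
                 for _, (cH, cV, cD) in pywt.swt2(pan, wavelet, level=levels))
    L_fused = L + detail
    # propagate the new intensity back to the three bands (LHS-style)
    ratio = L_fused / np.maximum(L, 1e-9)
    return rgb * ratio[..., None]
```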

Relevance: 90.00%

Publisher:

Abstract:

This paper proposes a novel high-capacity robust audio watermarking algorithm using the high-frequency band of the wavelet decomposition, to which the human auditory system (HAS) is not very sensitive. The main idea is to divide the high-frequency band into frames and, for embedding, to change the wavelet samples depending on the average of the relevant frame's samples. The experimental results show that the method has a very high capacity (about 11,000 bps), without significant perceptual distortion (ODG in [−1, 0] and SNR about 30 dB), and provides robustness against common audio signal processing such as additive noise, filtering, echo and MPEG compression (MP3).
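
The exact embedding function is not given in the abstract; the sketch below is a hypothetical variant of the frame-average idea, using PyWavelets:

```python
import numpy as np
import pywt

def embed(audio, bits, wavelet='db4', frame=512, alpha=0.05):
    """Embed one bit per frame of the highest-frequency wavelet band.

    The abstract only says samples are changed 'depending on the average
    of the relevant frame's samples'; the +/- offset rule below is a
    stand-in for the authors' exact embedding function.
    """
    approx, detail = pywt.dwt(audio, wavelet)      # detail = HF band
    for i, bit in enumerate(bits):                 # assumes enough frames
        seg = detail[i * frame:(i + 1) * frame]
        avg = np.abs(seg).mean()
        detail[i * frame:(i + 1) * frame] = seg + (1 if bit else -1) * alpha * avg
    return pywt.idwt(approx, detail, wavelet)
```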

Relevance: 80.00%

Publisher:

Abstract:

JPEG 2000 is an image compression standard that uses state-of-the-art techniques based on the wavelet transform. Its main advantages are better compression, the ability to operate on compressed data, and support for both lossy and lossless compression within the same method. BOI is the JPEG 2000 implementation of the Grup de Compressió Interactiva d'Imatges of the Departament d'Enginyeria de la Informació i les Comunicacions, designed to understand, critique and improve JPEG 2000 technologies. The new version attempts to reach all the corners of the standard that the previous version did not cover.

Relevance: 80.00%

Publisher:

Abstract:

JPEG2000 is an image compression standard that applies the wavelet transform followed by uniform quantization of the coefficients with a dead zone. Wavelet coefficients exhibit both statistical and visual dependencies. The statistical dependencies are taken into account in the JPEG2000 scheme; the visual dependencies, however, are not. This work aims to find a representation better adapted to the visual system than the one JPEG2000 provides directly. To find it, we use divisive normalization of the coefficients, a technique that has already shown results in both statistical and perceptual decorrelation of coefficients. Ideally, we would like to map the coefficients to a space of values in which a larger coefficient value implies a larger visual contribution, and use that space for coding. In practice, however, we want our coding system to be integrated into a standard. For this reason we use JPEG2000, an ITU standard that allows a choice of distortion measures during coding, and we use the distortion in the normalized-coefficient domain as the distortion measure for choosing which data are sent first.
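
As an illustration, one common form of divisive normalization looks like the following sketch (the work's exact perceptual parametrization is not given in the abstract; the constant and the pooling window are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def divisive_normalization(c, k=0.1, size=3):
    """Divide each wavelet coefficient by the pooled energy of its
    neighbours, a standard form of divisive normalization."""
    pooled = uniform_filter(c ** 2, size=size)   # local energy pool
    return c / np.sqrt(k ** 2 + pooled)
```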

Relevance: 80.00%

Publisher:

Abstract:

In the PhD thesis "Sound Texture Modeling" we deal with statistical modelling of textural sounds like water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet tree signal decomposition and the modeling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of a more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows one to sonically explore a database of units by means of their representation in a perceptual feature space. Concatenative synthesis with "molecules" built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
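
As a rough illustration of the first stage of this pipeline, a wavelet-tree decomposition of a texture sound can be obtained with PyWavelets as sketched below (depth and wavelet are illustrative; the thesis' probabilistic model then operates on these coefficient sequences):

```python
import pywt

def wavelet_tree(signal, wavelet='db4', depth=5):
    """Full wavelet-packet tree of a texture sound. Each node's
    coefficient sequence can then be modeled probabilistically,
    e.g. with a hidden Markov tree trained via EM."""
    wp = pywt.WaveletPacket(signal, wavelet, maxlevel=depth)
    return {node.path: node.data for node in wp.get_level(depth)}
```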

Relevance: 80.00%

Publisher:

Abstract:

VariScan is a software package for the analysis of DNA sequence polymorphisms at the whole-genome scale. Among other features, the software: (1) can conduct many population genetic analyses; (2) incorporates a multiresolution wavelet transform-based method that allows capturing relevant information from DNA polymorphism data; and (3) facilitates the visualization of the results in the most commonly used genome browsers.

Relevance: 30.00%

Publisher:

Abstract:

We propose a method to display full complex Fresnel holograms by adding the information displayed on two analogue ferroelectric liquid crystal spatial light modulators. One of them works in real-only configuration and the other in imaginary-only mode. The Fresnel holograms are computed by backpropagating an object to a selected distance with the Fresnel transform. Then, displaying the real and imaginary parts on each panel, the object is reconstructed at that distance from the modulators by simple propagation of light. We present simulation results that take into account the specifications of the modulators, as well as optical results. We have also studied the quality of reconstructions using only the real, imaginary, amplitude or phase information. Although the real-only and imaginary-only reconstructions look acceptable at certain distances, the full complex reconstruction is always better and is required when arbitrary distances are used.
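
A minimal sketch of the backpropagation step, using the standard transfer-function form of the Fresnel transform (textbook formulation; the wavelength and pixel pitch below are illustrative, not the paper's setup):

```python
import numpy as np

def fresnel(field, wavelength, z, dx):
    """Fresnel propagation over distance z via the transfer-function
    method (global phase factor exp(1j*k*z) omitted)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# backpropagate the object to the modulator plane (z < 0), then split
# the hologram between the real-only and imaginary-only panels:
# hologram = fresnel(obj, 633e-9, -0.3, 8e-6)
# real_panel, imag_panel = hologram.real, hologram.imag
```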

Relevance: 20.00%

Publisher:

Abstract:

Production of desirable outputs is often accompanied by undesirable by-products that have damaging effects on the environment and whose disposal is frequently regulated by public authorities. In this paper, we compute directional technology distance functions under particular assumptions concerning the disposability of bads in order to test for the existence of what we call 'complex situations', where the biggest producer is not the greatest polluter. Furthermore, we show how, in such situations, environmental regulation could achieve an effective reduction in the aggregate level of bad outputs without reducing the production of good outputs. Finally, we illustrate our methodology with an empirical application to a sample of Spanish ceramic tile producers.
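
A minimal sketch of a directional output distance function in the usual DEA form, with freely disposable inputs and good outputs and weakly disposable bads; the paper's precise disposability assumptions may differ. Built on scipy.optimize.linprog; all names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def ddf(x0, y0, b0, X, Y, B, gy=1.0, gb=1.0):
    """max beta  s.t.  Y'l >= y0 + beta*gy,  B'l = b0 - beta*gb,
    X'l <= x0,  l >= 0   (l: intensity weights over n producers).
    X: (n, inputs), Y: (n, goods), B: (n, bads)."""
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # linprog minimizes -beta
    A_ub = np.vstack([
        np.hstack([-Y.T, gy * np.ones((Y.shape[1], 1))]),  # good outputs
        np.hstack([X.T, np.zeros((X.shape[1], 1))]),       # inputs
    ])
    b_ub = np.concatenate([-np.atleast_1d(y0), np.atleast_1d(x0)])
    A_eq = np.hstack([B.T, gb * np.ones((B.shape[1], 1))]) # bad outputs
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=np.atleast_1d(b0),
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]
```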

Relevance: 20.00%

Publisher:

Abstract:

Economies are open complex adaptive systems far from thermodynamic equilibrium, and neo-classical environmental economics seems not to be the best way to describe the behaviour of such systems. Standard econometric analysis (i.e. time series) takes a deterministic and predictive approach, which encourages the search for predictive policy to 'correct' environmental problems. Rather, it seems that, because of the characteristics of economic systems, an ex-post analysis is more appropriate: one that describes the emergence of such systems' properties and that sees policy as a social steering mechanism. With this background, some of the recent empirical work published in the field of ecological economics that follows the approach defended here is presented. Finally, the conclusion is reached that a predictive use of econometrics (i.e. time series analysis) in ecological economics should be limited to cases in which uncertainty decreases, which is not the normal situation when analysing the evolution of economic systems. However, that does not mean we should not use empirical analysis. On the contrary, this is to be encouraged, but from a structural and ex-post point of view.

Relevance: 20.00%

Publisher:

Abstract:

Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for predicting the value of a variable based on the values of others, as in linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules that state that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity, so rule selection can be viewed as a choice problem among pairs of degree of accuracy and degree of complexity. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that finding out whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that attains a certain value of R² is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
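
The regression claim can be phrased as a subset-selection decision problem (our phrasing, reconstructed from the abstract; k and r are the problem's parameters):

```latex
\[
  \text{Given } X \in \mathbb{R}^{m \times n},\ y \in \mathbb{R}^{m},\
  k \in \mathbb{N},\ r \in [0,1]:
  \quad \text{is there } S \subseteq \{1,\dots,n\},\ |S| \le k,
  \ \text{such that } R^{2}\bigl(y \mid X_{S}\bigr) \ge r\,?
\]
```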

Relevance: 20.00%

Publisher:

Abstract:

"Vegeu el resum a l'inici del document del fitxer adjunt."