931 results for Signal processing -- Digital techniques
Abstract:
Chaotic signals have been considered potentially attractive in many signal processing applications ranging from wideband communication systems to cryptography and watermarking. Besides, some devices such as nonlinear adaptive filters and phase-locked loops can present chaotic behavior. In this paper, we derive analytical expressions for the autocorrelation sequence, power spectral density and essential bandwidth of chaotic signals generated by the skew tent map. From these results, we suggest possible applications in communication systems. (C) 2009 Elsevier B.V. All rights reserved.
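As a companion to this abstract, the sketch below iterates the skew tent map and estimates its autocorrelation sequence numerically; the parameter name `alpha`, the initial condition and the orbit length are illustrative assumptions, and the paper derives these quantities analytically rather than by simulation.

```python
# Minimal sketch (assumed parameterization): orbit of the skew tent map and a
# numerical estimate of its autocorrelation sequence.
import numpy as np

def skew_tent_orbit(x0, alpha, n):
    """Iterate x_{k+1} = x_k/alpha if x_k < alpha, else (1 - x_k)/(1 - alpha)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = x[k] / alpha if x[k] < alpha else (1.0 - x[k]) / (1.0 - alpha)
    return x

def autocorrelation(x, max_lag):
    """Biased sample autocorrelation of the zero-mean version of x."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return r / r[0]

orbit = skew_tent_orbit(x0=0.37, alpha=0.6, n=100_000)
print(autocorrelation(orbit, max_lag=5))   # decays roughly geometrically with the lag
```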
Abstract:
Phase-locked loops (PLLs) are widely used in applications related to control systems and telecommunication networks. Here we show that a single-chain master-slave network of third-order PLLs can exhibit stationary, periodic and chaotic behaviors, when the value of a single parameter is varied. Hopf, period-doubling and saddle-saddle bifurcations are found. Chaos appears in dissipative and non-dissipative conditions. Thus, chaotic behaviors with distinct dynamical features can be generated. A way of encoding binary messages using such a chaos-based communication system is suggested. (C) 2009 Elsevier B.V. All rights reserved.
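The sketch below is a minimal, assumed model of a single master-slave pair rather than the specific network studied in the paper: a third-order loop (sinusoidal phase detector, two loop-filter integrators plus the VCO integrator) is integrated numerically while a single gain is swept. All parameter names and values are illustrative.

```python
# Minimal sketch, not the paper's network model: one slave third-order PLL driven
# by a master phase ramp; a single gain k is swept to observe different regimes.
import numpy as np
from scipy.integrate import solve_ivp

W_MASTER = 2 * np.pi * 1.0    # master (input) angular frequency [rad/s]
W_FREE = 2 * np.pi * 0.9      # slave VCO free-running frequency [rad/s]

def pll3(t, y, k, a, b):
    theta_s, v1, v2 = y
    e = np.sin(W_MASTER * t - theta_s)              # sinusoidal phase detector
    dv1 = e                                         # first loop-filter integrator
    dv2 = v1                                        # second loop-filter integrator
    dtheta = W_FREE + k * (e + a * v1 + b * v2)     # VCO frequency control
    return [dtheta, dv1, dv2]

for k in (0.5, 2.0, 8.0):                           # sweep a single gain parameter
    sol = solve_ivp(pll3, (0.0, 200.0), [0.0, 0.0, 0.0],
                    args=(k, 1.0, 0.25), max_step=0.01)
    err = W_MASTER * sol.t - sol.y[0]               # phase error along the run
    print(f"k = {k}: phase-error spread over the final stretch = "
          f"{np.ptp(err[-2000:]):.3f} rad")
```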
Abstract:
We use networks composed of three phase-locked loops (PLLs), where one of them is the master, for recognizing noisy images. The values of the coupling weights among the PLLs determine the noise level up to which the input image can still be identified successfully. Analytical results and numerical tests concerning the performance of the scheme are presented. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence of excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
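For reference, the sketch below implements the baseline CMA update mentioned above, not the dual-mode SWA proposed in the paper; the channel, equalizer length, step size and source alphabet are illustrative assumptions.

```python
# Baseline constant-modulus algorithm (CMA) sketch on a toy BPSK channel.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
s = rng.choice([-1.0, 1.0], size=n)                  # BPSK source (constant modulus)
h = np.array([1.0, 0.4, -0.2])                       # toy channel impulse response
x = np.convolve(s, h)[:n] + 0.01 * rng.standard_normal(n)

L, mu = 11, 1e-3
w = np.zeros(L); w[L // 2] = 1.0                     # centre-spike initialization
R = np.mean(s**4) / np.mean(s**2)                    # CMA dispersion constant (= 1 here)

for k in range(L, n):
    xk = x[k - L:k][::-1]                            # regressor, most recent sample first
    y = w @ xk                                       # equalizer output
    w -= mu * (y**2 - R) * y * xk                    # stochastic-gradient CMA update

y_out = np.array([w @ x[k - L:k][::-1] for k in range(L, n)])
print("output modulus spread after convergence:", np.std(np.abs(y_out[-2000:])))
```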
Abstract:
An important topic in genomic sequence analysis is the identification of protein coding regions. In this context, several model-independent methods based on the occurrence of specific nucleotide patterns in coding regions have been proposed. Nonetheless, these methods have not been completely suitable because they depend on an empirically predefined window length for the local analysis of a DNA region. We introduce a method based on a modified Gabor-wavelet transform (MGWT) for the identification of protein coding regions. This novel transform is tuned to analyze periodic signal components and has the advantage of being independent of the window length. We compared the performance of the MGWT with that of other methods using eukaryote data sets. The results show that MGWT outperforms all assessed model-independent methods with respect to identification accuracy. These results indicate that at least part of the identification errors produced by the previous methods stem from the fixed working scale. The new method not only avoids this source of errors but also provides a tool for detailed exploration of nucleotide occurrences.
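For context, the sketch below computes the classical fixed-window period-3 spectral measure on which window-based methods rely, i.e. exactly the kind of predefined window length the MGWT is designed to avoid; the window length and toy sequence are illustrative, and the MGWT itself is not reproduced here.

```python
# Fixed-window period-3 measure: spectral content at f = 1/3 of the four
# base-indicator sequences over a sliding window of predefined length.
import numpy as np

def period3_measure(seq, window=351):
    """Sum over bases of |DFT at f = 1/3| squared, per window position."""
    seq = seq.upper()
    k = np.arange(window)
    kernel = np.exp(-2j * np.pi * k / 3)             # DFT kernel at f = 1/3
    n_out = len(seq) - window + 1
    score = np.zeros(n_out)
    for base in "ACGT":
        u = np.array([c == base for c in seq], dtype=float)
        for i in range(n_out):
            score[i] += abs(np.dot(u[i:i + window], kernel)) ** 2
    return score

# crude test sequence: a period-3 "coding-like" stretch followed by a non-coding-like one
toy = "ATG" + "GCT" * 200 + "TTATATATATA" * 60
print(period3_measure(toy)[::150])                   # larger values over the periodic stretch
```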
Abstract:
In high-velocity open channel flows, the measurements of air-water flow properties are complicated by the strong interactions between the flow turbulence and the entrained air. In the present study, an advanced signal processing of traditional single- and dual-tip conductivity probe signals is developed to provide further details on the air-water turbulence level, time and length scales. The technique is applied to turbulent open channel flows on a stepped chute conducted in a large-size facility with flow Reynolds numbers ranging from 3.8 E+5 to 7.1 E+5. The air-water flow properties presented some basic characteristics that were qualitatively and quantitatively similar to previous skimming flow studies. Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. These included the distributions of void fraction, bubble count rate, interfacial velocity and turbulence level at a macroscopic scale, and the auto- and cross-correlation functions at the microscopic level. New correlation analyses yielded a characterisation of the large eddies advecting the bubbles. Basic results included the integral turbulent length and time scales. The turbulent length scales characterised some measure of the size of large vortical structures advecting air bubbles in the skimming flows, and the data were closely related to the characteristic air-water depth Y90. In the spray region, present results highlighted the existence of an upper spray region for C > 0.95 to 0.97 in which the distributions of droplet chord sizes and integral advection scales presented some marked differences with the rest of the flow.
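As an illustration of one step of such signal processing, the sketch below estimates an integral turbulent time scale by integrating the normalised auto-correlation of a probe-like signal up to its first zero crossing; the synthetic signal, sampling rate and correlation time are assumed values, not measured data.

```python
# Integral time scale from the autocorrelation of a simulated probe signal.
import numpy as np
from scipy.signal import lfilter

def autocorrelation(x):
    """Biased autocorrelation estimate computed via the FFT."""
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * len(x))
    r = np.fft.irfft(f * np.conj(f))[:len(x)]
    return r / r[0]

def integral_time_scale(signal, fs):
    """Integrate the normalised autocorrelation up to its first zero crossing."""
    r = autocorrelation(signal)
    zero = np.argmax(r <= 0)                 # index of the first non-positive lag
    zero = zero if zero > 0 else len(r)
    return np.trapz(r[:zero], dx=1.0 / fs)

fs, n, tau = 20_000, 200_000, 2e-3           # sampling rate [Hz], samples, target scale [s]
rng = np.random.default_rng(1)
a = np.exp(-1.0 / (fs * tau))
x = lfilter([1.0], [1.0, -a], rng.standard_normal(n))   # AR(1) noise, correlation time ~ tau
print(f"estimated integral time scale ~ {integral_time_scale(x, fs) * 1e3:.2f} ms")
```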
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (meta data); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the meta data, which tag the data blocks, can be propagated and applied consistently. This is possible at the disk level, in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
Abstract:
Skimming flows on stepped spillways are characterised by a significant rate of turbulent dissipation on the chute. Herein an advanced signal processing of traditional conductivity probe signals is developed to provide further details on the turbulent time and length scales. The technique is applied to a 22° stepped chute operating with flow Reynolds numbers between 3.8 E+5 and 7.1 E+5. The new correlation analyses yielded a characterisation of large eddies advecting the bubbles. The turbulent length scales were related to the characteristic depth Y90. Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. These included the distributions of void fraction, bubble count rate, interfacial velocity and turbulence level, and turbulence time and length scales. The self-similarity results were significant because they provided a picture general enough to be used to characterise the air-water flow field in prototype spillways.
Abstract:
In high-velocity free-surface flows, air is continuously being trapped and released through the free-surface. Such high-velocity, highly-aerated flows cannot be studied numerically because of the large number of relevant equations and parameters. Herein an advanced signal processing of traditional single- and dual-tip conductivity probes provides some new information on the air-water turbulent time and length scales. The technique is applied to turbulent open channel flows in a large-size facility. The auto- and cross-correlation analyses yield some characterisation of the large eddies advecting the bubbles. The transverse integral turbulent length and time scales are related to the step height: i.e., Lxy/h ~ 0.02 to 0.2, and T·(g/h)^1/2 ~ 0.004 to 0.04. The results are independent of the Reynolds number. The present findings emphasise that turbulent dissipation by large-scale vortices is a significant process in the intermediate zone between the spray and bubbly flow regions (0.3 < C < 0.7). Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. The results are significant because they provide a picture general enough to be used to characterise the air-water flow field in prototype spillways.
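A related dual-tip correlation step can be sketched as follows: the time lag of the cross-correlation peak between the two tips gives the interfacial advection velocity V = dx / T_lag. The tip spacing, sampling rate and binary signals below are simulated assumptions, not measurements.

```python
# Advection velocity from the cross-correlation of two (simulated) probe tips.
import numpy as np

def advection_velocity(leading, trailing, dx, fs):
    """Velocity from the lag that maximises the cross-correlation of the two tips."""
    a = leading - leading.mean()
    b = trailing - trailing.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)    # samples by which the trailing tip lags
    return dx * fs / lag

fs, dx, v_true = 20_000, 8.0e-3, 3.2         # sampling rate [Hz], tip spacing [m], velocity [m/s]
delay = int(round(dx / v_true * fs))         # ideal transit time in samples
rng = np.random.default_rng(2)
raw = (rng.random(12_000) < 0.3).astype(float)   # crude binary air/water indicator signal
lead, trail = raw[delay:], raw[:-delay]      # trailing tip sees the signal `delay` samples later
print(f"recovered advection velocity ~ {advection_velocity(lead, trail, dx, fs):.2f} m/s")
```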
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipeline implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound on the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. In this paper, the popular genetic algorithm approach is adopted, since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
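The two ingredients named above can be sketched as follows: a recursive (IIR) filter is stable when all roots of its denominator polynomial lie inside the unit circle, and a population-style search looks for low-order stable predictor coefficients. A crude random search stands in here for the genetic algorithm, and all signals and parameter ranges are illustrative.

```python
# Stability test via denominator roots plus a random search over low-order predictors.
import numpy as np

def is_stable(a):
    """a = [1, a1, ..., aN]; stable iff every pole lies strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(a)) < 1.0))

def prediction_error(a, x):
    """Mean squared error of the predictor x[n] ~ -a1*x[n-1] - ... - aN*x[n-N]."""
    order = len(a) - 1
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    e = x[order:] + X @ a[1:]
    return float(np.mean(e**2))

rng = np.random.default_rng(3)
x = np.sin(0.2 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)   # signal to predict

order, best, best_err = 2, None, np.inf
for _ in range(5000):                                # stand-in for GA generations
    a = np.concatenate(([1.0], rng.uniform(-2.0, 2.0, order)))
    if is_stable(a):
        err = prediction_error(a, x)
        if err < best_err:
            best, best_err = a, err
print("best stable order-2 predictor:", np.round(best, 3), " MSE:", round(best_err, 4))
```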
Abstract:
The paper discusses the bistatic radar parameters for the case when the transmitter is a satellite emitting communication signals. The model utilises signals from an Iridium-like low earth orbiting satellite system. The maximum detection range, when thermal noise-limited, is discussed at the theoretical level and these results are compared with experimentation. Satellite-radar signal levels and the power of ground reflections are evaluated.
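The thermal-noise-limited case can be summarised by the bistatic radar equation, solved for the maximum product of transmitter-target and target-receiver ranges; in the sketch below, every numerical value (powers, gains, bandwidth, losses) is an illustrative placeholder rather than a figure from the paper.

```python
# Thermal-noise-limited bistatic radar equation solved for the range product Rt*Rr.
import numpy as np

k_B   = 1.380649e-23        # Boltzmann constant [J/K]
f     = 1.62e9              # Iridium-band carrier frequency [Hz]
lam   = 3e8 / f             # wavelength [m]
P_t   = 10.0                # transmitted power [W]            (illustrative)
G_t   = 10**(24 / 10)       # satellite antenna gain           (illustrative)
G_r   = 10**(30 / 10)       # receiver antenna gain            (illustrative)
sigma = 10.0                # bistatic radar cross-section [m^2]
T_s   = 500.0               # system noise temperature [K]
B     = 30e3                # receiver bandwidth [Hz]
snr   = 10**(13 / 10)       # minimum detectable SNR
L     = 10**(3 / 10)        # combined losses

rt_rr_max = np.sqrt(P_t * G_t * G_r * lam**2 * sigma /
                    ((4 * np.pi)**3 * k_B * T_s * B * snr * L))
print(f"maximum Rt*Rr ~ {rt_rr_max:.3e} m^2")
```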
Abstract:
Spaceborne/airborne synthetic aperture radar (SAR) systems provide high resolution two-dimensional terrain imagery. The paper proposes a technique for combining multiple SAR images, acquired on flight paths slightly separated in the elevation direction, to generate high resolution three-dimensional imagery. The technique could be viewed as an extension to interferometric SAR (InSAR) in that it generates topographic imagery with an additional dimension of resolution. The 3-D multi-pass SAR imaging system is typically characterised by a relatively short ambiguity length in the elevation direction. To minimise the associated ambiguities we exploit the relative phase information within the set of images to track the terrain landscape. The SAR images are then coherently combined, via a nonuniform DFT, over a narrow (in elevation) volume centred on the 'dominant' terrain ground plane. The paper includes a detailed description of the technique, background theory, including achievable resolution, and the results of an experimental study.
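The coherent combination step can be sketched for a single pixel: the complex samples from the elevation-separated passes are combined via a nonuniform DFT over a grid of candidate heights. The geometry, baselines and noise level below are assumptions, and the phase model is the standard SAR tomography approximation rather than the paper's exact formulation.

```python
# Nonuniform DFT over candidate heights for one pixel observed from 9 passes.
import numpy as np

lam, r0 = 0.03, 8.0e5                     # wavelength [m], slant range [m]  (illustrative)
baselines = np.linspace(-150.0, 150.0, 9) # elevation baselines of the passes [m]
kz = 4 * np.pi * baselines / (lam * r0)   # elevation wavenumber per pass [rad/m]

z_true = 35.0                             # true scatterer height above the reference plane [m]
rng = np.random.default_rng(4)
g = np.exp(1j * kz * z_true) + 0.1 * (rng.standard_normal(9) + 1j * rng.standard_normal(9))

z_grid = np.linspace(-100.0, 100.0, 801)  # narrow elevation search volume
# correlate the measured phases with the steering vector of each candidate height
spectrum = np.abs(np.exp(-1j * np.outer(z_grid, kz)) @ g)
print(f"estimated height ~ {z_grid[np.argmax(spectrum)]:.1f} m (true {z_true} m)")
```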
Design of improved rail-to-rail low-distortion and low-stress switches in advanced CMOS technologies
Abstract:
This paper describes the efficient design of an improved and dedicated switched-capacitor (SC) circuit capable of linearizing CMOS switches to allow SC circuits to reach low distortion levels. The described circuit (SC linearization control circuit, SLC) has the advantage over conventional clock-bootstrapping circuits of exhibiting low stress, since large gate voltages are avoided. This paper presents exhaustive corner simulation results for an SC sample-and-hold (S/H) circuit that employs the proposed and optimized circuits, together with the experimental evaluation of a complete 10-bit ADC using this S/H circuit. These results show that the SLC circuits can reduce distortion and increase dynamic linearity above 12 bits for wide input signal bandwidths.
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which the geometry-based approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We then resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
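The linear mixing model assumed by the method can be sketched as follows: Dirichlet-distributed abundances mix a few endmember signatures, and abundances are recovered per pixel with a constrained least-squares step (non-negativity plus a row-augmentation trick for the sum-to-one constraint). The DECA/GEM inference itself is not reproduced, and all dimensions and values are illustrative.

```python
# Linear mixing with Dirichlet abundances, recovered by non-negative least squares
# with a soft sum-to-one constraint (row augmentation).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
bands, p, n_pix = 50, 3, 500
M = np.abs(rng.standard_normal((bands, p)))                   # endmember signatures (illustrative)
A_true = rng.dirichlet(alpha=[2.0, 5.0, 3.0], size=n_pix).T   # abundances, columns sum to 1
Y = M @ A_true + 0.005 * rng.standard_normal((bands, n_pix))  # observed pixels

delta = 30.0                                                  # weight of the sum-to-one constraint
M_aug = np.vstack([M, delta * np.ones((1, p))])
A_hat = np.zeros((p, n_pix))
for i in range(n_pix):
    y_aug = np.concatenate([Y[:, i], [delta]])
    A_hat[:, i], _ = nnls(M_aug, y_aug)                       # non-negative LS per pixel
print("mean abundance error:", np.mean(np.abs(A_hat - A_true)).round(4))
```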