939 results for quantization artifacts
Abstract:
The use of bit-level systolic arrays in the design of a vector quantized transformed subband coding system for speech signals is described. It is shown how the major components of this system can be decomposed into a small number of highly regular building blocks that interface directly to one another. These include circuits for the computation of the discrete cosine transform, the inverse discrete cosine transform, and vector quantization codebook search.
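The abstract describes the signal path rather than the circuits themselves. As a rough functional sketch of that path in Python (forward DCT of speech frames, full-search vector-quantization codebook lookup, inverse DCT of the selected codewords), something like the following could serve; the frame length, codebook size and random data are placeholder assumptions, and a real codebook would be trained (e.g. with the LBG algorithm) rather than drawn at random.

```python
import numpy as np
from scipy.fft import dct, idct

def vq_encode(vectors, codebook):
    """Full-search VQ: index of the nearest codeword (squared Euclidean distance) for each vector."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
frames = rng.standard_normal((4, 8))                      # 4 speech frames of 8 samples (toy data)
coeffs = dct(frames, type=2, norm="ortho", axis=1)        # forward DCT
codebook = rng.standard_normal((16, 8))                   # 16-entry codebook (untrained placeholder)
indices = vq_encode(coeffs, codebook)                     # codebook search
recon = idct(codebook[indices], type=2, norm="ortho", axis=1)  # inverse DCT of quantized coefficients
```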
Abstract:
The real-time implementation of an efficient signal compression technique, Vector Quantization (VQ), is of great importance to many digital signal coding applications. In this paper, we describe a new family of bit-level systolic VLSI architectures which offer an attractive solution to this problem. These architectures are based on a bit-serial, word-parallel approach, and high performance and efficiency can be achieved for VQ applications across a wide range of bandwidths. Compared with their bit-parallel counterparts, these bit-serial circuits provide better alternatives for VQ implementations in terms of performance and cost. © 1995 Kluwer Academic Publishers.
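As a purely behavioural illustration of the bit-serial, word-parallel idea (one bit of every input word consumed per cycle, partial products accumulated with shifts), and not of the VLSI circuits themselves, a toy model in Python might look like this; the word length and data are assumptions.

```python
import numpy as np

def bit_serial_dot(x, w, nbits=8):
    """Bit-serial, word-parallel dot product of non-negative integer words x with weights w."""
    acc = 0
    for b in range(nbits):                  # one 'cycle' per bit, least-significant first
        bits = (x >> b) & 1                 # the b-th bit of every word, taken in parallel
        acc += int(np.dot(bits, w)) << b    # partial product accumulated with a shift
    return acc

x = np.array([3, 5, 7, 2], dtype=np.int64)
w = np.array([1, 2, 3, 4], dtype=np.int64)
assert bit_serial_dot(x, w) == int(np.dot(x, w))   # matches the ordinary word-parallel result
```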
Abstract:
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
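The abstract does not give the quantizer parameters, so the following is only a minimal sketch of a standard logarithmic quantizer with levels +/- u0 * rho**i and the usual sector bound delta = (1 - rho) / (1 + rho); u0 and rho below are illustrative values. The sector bound is what allows the quantization error to be handled as a norm-bounded multiplicative uncertainty in the filter design.

```python
import numpy as np

def log_quantize(v, u0=1.0, rho=0.8):
    """Logarithmic quantizer: each nonzero value is mapped to the level u0 * rho**i
    whose interval (rho**i * c, rho**(i-1) * c], with c = u0 * (1 + rho) / 2, contains it."""
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    nz = v != 0
    mag, sgn = np.abs(v[nz]), np.sign(v[nz])
    c = u0 * (1.0 + rho) / 2.0
    i = np.floor(np.log(mag / c) / np.log(rho)) + 1     # level index for each magnitude
    out[nz] = sgn * u0 * rho ** i
    return out

rho = 0.8
delta = (1 - rho) / (1 + rho)                 # sector bound: |q(v) - v| <= delta * |v|
z = np.array([0.03, -0.7, 2.5])
zq = log_quantize(z, rho=rho)
assert np.all(np.abs(zq - z) <= delta * np.abs(z) + 1e-12)
```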
Abstract:
Polyphase IIR structures have recently proven very attractive for high-performance filters that can be designed using very few coefficients. This, combined with their low sensitivity to coefficient quantization in comparison to standard FIR and IIR structures, makes them well suited to very fast filtering when implemented in fixed-point arithmetic. However, although the mathematical description is very simple, there exist a number of ways to implement such filters. In this paper, we take four of these different implementation structures, analyze the rounding noise originating from the limited arithmetic wordlength of the mathematical operators, and check the internal data growth within the structure. These analyses need to be done to ensure that the performance of the implementation matches the performance of the theoretical design. The theoretical approach that we present has been confirmed by the results of the fixed-point simulation done in Simulink and verified by an equivalent bit-true implementation in VHDL.
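To make the rounding-noise analysis concrete, here is a small sketch (not taken from the paper) that compares a double-precision first-order allpass section, the basic building block of polyphase IIR structures, against the same section with every multiplier output rounded to a fixed number of fractional bits; the coefficient, word length and input are placeholder assumptions.

```python
import numpy as np

def allpass1(x, a, frac_bits=None):
    """First-order allpass y[n] = a*x[n] + x[n-1] - a*y[n-1].
    If frac_bits is given, each multiplier output is rounded to that many fractional bits,
    modelling a fixed-point implementation; otherwise the section runs in double precision."""
    q = (lambda v: np.round(v * 2**frac_bits) / 2**frac_bits) if frac_bits is not None else (lambda v: v)
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = q(a * xn) + x1 - q(a * y1)
        x1, y1 = xn, y[n]
    return y

rng = np.random.default_rng(1)
x = rng.uniform(-0.5, 0.5, 4096)              # toy input, scaled to avoid overflow
a = 0.375                                     # illustrative allpass coefficient
noise = allpass1(x, a, frac_bits=12) - allpass1(x, a)
print("rounding-noise power:", np.mean(noise ** 2))
```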
Abstract:
The enhanced functional sensitivity offered by ultra-high field imaging may significantly benefit simultaneous EEG-fMRI studies, but the concurrent increases in artifact contamination can strongly compromise EEG data quality. In the present study, we focus on EEG artifacts created by head motion in the static B0 field. A novel approach for motion artifact detection is proposed, based on a simple modification of a commercial EEG cap, in which four electrodes are non-permanently adapted to record only magnetic induction effects. Simultaneous EEG-fMRI data were acquired with this setup, at 7T, from healthy volunteers undergoing a reversing-checkerboard visual stimulation paradigm. Data analysis assisted by the motion sensors revealed that, after gradient artifact correction, EEG signal variance was largely dominated by pulse artifacts (81-93%), but contributions from spontaneous motion (4-13%) were still comparable to or even larger than those of actual neuronal activity (3-9%). Multiple approaches were tested to determine the most effective procedure for denoising EEG data incorporating motion sensor information. Optimal results were obtained by applying an initial pulse artifact correction step (AAS-based), followed by motion artifact correction (based on the motion sensors) and ICA denoising. On average, motion artifact correction (after AAS) yielded a 61% reduction in signal power and a 62% increase in VEP trial-by-trial consistency. Combined with ICA, these improvements rose to a 74% power reduction and an 86% increase in trial consistency. Overall, the improvements achieved were well appreciable at single-subject and single-trial levels, and set an encouraging quality mark for simultaneous EEG-fMRI at ultra-high field.
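The abstract does not specify the algorithm applied to the motion-sensor channels; one common choice is to linearly regress the reference channels out of each EEG channel, as in this sketch, where every shape and signal is a placeholder.

```python
import numpy as np

def regress_out_motion(eeg, motion):
    """Remove the component of each EEG channel that is linearly explained by the motion sensors.
    eeg: (n_channels, n_samples); motion: (n_sensors, n_samples)."""
    X = motion.T                                        # design matrix: one column per motion sensor
    beta, *_ = np.linalg.lstsq(X, eeg.T, rcond=None)    # least-squares fit per EEG channel
    return eeg - (X @ beta).T                           # residual EEG after motion regression

# Toy data: 4 motion-sensor channels leaking into 32 EEG channels.
rng = np.random.default_rng(2)
motion = rng.standard_normal((4, 1000))
leakage = rng.standard_normal((32, 4))
eeg = rng.standard_normal((32, 1000)) + leakage @ motion
clean = regress_out_motion(eeg, motion)
```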
Abstract:
The present study consists of the quantification of specific metals (Cr, Cd, Pb, Zn and Cu) observed in the experimental bivalve, Villorita species. Bivalve specimens were collected seasonally from three identified hot spots of Vembanad Lake. Soft-tissue concentrations of metals are very sensitive in reflecting changes in the ambient environment and hence important in assessing environmental quality. Concentrations of Zn in the bivalves were fairly high compared to the other metals. All the stations showed a maximum concentration during the pre-monsoon season and a minimum during the other two seasons. Levels of Pb, Cu, Zn, Cd and Cr are in the ranges 0-6.17 mg/kg, 0-17.224 mg/kg, 1.916-255.163 mg/kg, 0.325-4.133 mg/kg, and 0-15.233 mg/kg, respectively.
Abstract:
We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
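For readers unfamiliar with mixture models, the computational idea is simply that each alignment site's likelihood is a weighted sum of its likelihoods under the component models. A minimal sketch with made-up numbers (the per-component site likelihoods would come from a standard pruning computation) is:

```python
import numpy as np

def mixture_site_log_likelihood(component_liks, weights):
    """log L(site) = log sum_k w_k * L_k(site), with L_k the site likelihood under component model k."""
    return np.log(np.dot(weights, component_liks))

component_liks = np.array([1.2e-4, 8.0e-6])   # illustrative likelihoods under two component models
weights = np.array([0.7, 0.3])                # mixture weights (sum to 1)
print(mixture_site_log_likelihood(component_liks, weights))
```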
Abstract:
How does the manipulation of visual representations play a role in the practices of generating, evolving and exchanging knowledge? The role of visual representation in mediating knowledge work is explored in a study of design work of an architectural practice, Edward Cullinan Architects. The intensity of interactions with visual representations in the everyday activities on design projects is immediately striking. Through a discussion of observed design episodes, two ways are articulated in which visual representations act as 'artefacts of knowing'. As communication media they are symbolic representations, rich in meaning, through which ideas are articulated, developed and exchanged. Furthermore, as tangible artefacts they constitute material entities with which to interact and thereby develop knowledge. The communicative and interactive properties of visual representations constitute them as central elements of knowledge work. The paper explores emblematic knowledge practices supported by visual representation and concludes by pinpointing avenues for further research.
Abstract:
A Fractal Quantizer is proposed that replaces the expensive division operation in the computation of scalar quantization with more modest and readily available multiplication, addition and shift operations. Although the proposed method is iterative in nature, simulations show virtually undetectable distortion to the naked eye for JPEG-compressed images using a single iteration. The method requires a change to the usual tables used in the JPEG algorithm, but of similar size. For practical purposes, performing quantization is reduced to a multiply-plus-add operation easily programmed on low-end embedded processors and suitable for efficient, very high speed implementation in ASIC or FPGA hardware. An FPGA hardware implementation shows up to 15x area-time savings compared to standard solutions for devices with dedicated multipliers. The method can also be immediately extended to perform adaptive quantization.
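The abstract does not give the Fractal Quantizer's tables or its iteration, so the following is only a generic illustration of division-free quantization: the division by the JPEG step size is replaced by multiplication with a fixed-point reciprocal estimate refined by Newton-Raphson, using only multiplies, adds and shifts. The crude power-of-two initial estimate used here needs a few iterations; the paper reports that a single iteration suffices with its tailored tables.

```python
def divfree_quantize(coeff, step, frac_bits=16, iterations=3):
    """Quantize coeff by step without division: multiply by a fixed-point reciprocal of step,
    refined by Newton-Raphson (r <- r * (2 - step * r)), then round with an add and a shift."""
    sign = -1 if coeff < 0 else 1
    recip = (1 << frac_bits) >> step.bit_length()          # coarse 1/step guess (a table in practice)
    for _ in range(iterations):
        recip = (recip * ((2 << frac_bits) - step * recip)) >> frac_bits
    return sign * ((abs(coeff) * recip + (1 << (frac_bits - 1))) >> frac_bits)

# Sanity check against true rounded division for a few DCT-coefficient / step pairs.
for coeff, step in [(137, 16), (-53, 10), (1023, 24)]:
    print(coeff, step, divfree_quantize(coeff, step), round(coeff / step))
```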