109 results for quantization artifacts
Abstract:
For structured-light scanners, the projective geometry between a projector-camera pair is identical to that of a camera-camera pair. Consequently, in conjunction with calibration, a variety of geometric relations are available for three-dimensional Euclidean reconstruction. In this paper, we use projector-camera epipolar properties and the projective invariance of the cross-ratio to solve for 3D geometry. A key contribution of our approach is the use of homographies induced by reference planes, along with a calibrated camera, resulting in a simple parametric representation for projector and system calibration. Compared to existing solutions that require an elaborate calibration process, our method is simple while ensuring geometric consistency. Our formulation using the invariance of the cross-ratio is also extensible to multiple estimates of 3D geometry that can be analysed in a statistical sense. The performance of our system is demonstrated on some cultural artifacts and geometric surfaces.
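The projective invariance of the cross-ratio that the method builds on can be checked numerically. Below is a minimal 1D sketch; the projective map and point coordinates are arbitrary illustrations, not values from the paper:

```python
def cross_ratio(a, b, c, d):
    # Cross-ratio (A, B; C, D) of four collinear points given by scalar
    # coordinates along the line; a projective invariant.
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def project(x):
    # A 1D projective (Moebius) map standing in for a homography induced
    # by a reference plane; the coefficients are arbitrary.
    return (2.0 * x + 0.5) / (0.3 * x + 1.0)

pts = [0.0, 1.0, 2.0, 4.0]
cr_before = cross_ratio(*pts)
cr_after = cross_ratio(*(project(x) for x in pts))
# cr_before equals cr_after up to floating-point error: the projective
# map leaves the cross-ratio unchanged.
```

Because the cross-ratio survives any such map, measurements taken in the projector or camera image plane constrain the same quantity in 3D space.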
Abstract:
In this paper, we present two new filtered backprojection (FBP) type algorithms for cylindrical-detector helical cone-beam geometry with no position-dependent backprojection weight. The algorithms extend the recent exact Hilbert-filtering-based 2D divergent-beam reconstruction with no backprojection weight to FDK-type algorithms for reconstruction in 3D helical-trajectory cone-beam tomography. The two algorithms, named HFDK-W1 and HFDK-W2, yield better image quality, improved noise uniformity, lower noise and reduced cone-beam artifacts.
Abstract:
In positron emission tomography (PET), image reconstruction is a demanding problem. Since PET image reconstruction is an ill-posed inverse problem, new methodologies need to be developed. Although previous studies show that incorporation of spatial and median priors improves the image quality, image artifacts such as over-smoothing and streaking are evident in the reconstructed image. In this work, we use a simple, yet powerful technique to tackle the PET image reconstruction problem. The proposed technique is based on the integration of a Bayesian approach with a finite impulse response (FIR) filter. An FIR filter is designed whose coefficients are determined from a surface diffusion model. The resulting reconstructed image is iteratively filtered and fed back to obtain the new estimate. Experiments are performed on a simulated PET system. The results show that the proposed approach is better than the recently proposed MRP algorithm in terms of image quality and normalized mean square error.
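The filter-and-feed-back loop described above can be sketched in one dimension. The smoothing kernel below is a generic stand-in for the paper's surface-diffusion-derived FIR coefficients, which are not reproduced here:

```python
import numpy as np

# Stand-in FIR taps; the paper derives its coefficients from a surface
# diffusion model, which is not reproduced here.
KERNEL = np.array([0.25, 0.5, 0.25])

def iterate_filter(row, n_iter=10):
    # Filter the current estimate and feed the result back as the next
    # estimate, in the spirit of the iterative FIR scheme (1D sketch).
    est = row.astype(float)
    for _ in range(n_iter):
        est = np.convolve(est, KERNEL, mode="same")
    return est

noisy = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
smooth = iterate_filter(noisy)
# Repeated feedback filtering damps the oscillatory (noisy) component.
```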
Abstract:
The neural network finds application in many image denoising tasks because of inherent characteristics such as nonlinear mapping and self-adaptiveness. The design of filters largely depends on a priori knowledge about the type of noise; due to this, standard filters are application- and image-specific. Widely used filtering algorithms reduce noisy artifacts by smoothing, but this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image non-smooth. An integrated general approach to designing a finite impulse response filter based on a principal component neural network (PCNN) is proposed in this study for image filtering, optimized in the sense of visual inspection and error metric. The algorithm exploits the inter-pixel correlation by iteratively updating the filter coefficients using the PCNN, and performs optimal smoothing of the noisy image while preserving high- and low-frequency features. Evaluation results show that the proposed filter is robust under various noise distributions. Further, the number of unknown parameters is small, and most of them are adaptively obtained from the processed image.
Abstract:
The queenless ponerine ant Diacamma ceylonense and a population of Diacamma from the Nilgiri hills, which we refer to as 'nilgiri', exhibit interesting similarities as well as dissimilarities. Molecular phylogenetic study of these morphologically almost identical taxa has shown that D. ceylonense is closely related to 'nilgiri' and indicates that 'nilgiri' is a recent divergence in the Diacamma phylogenetic tree. However, there is a striking behavioural difference in the way reproductive monopoly is maintained by the respective gamergates (mated egg-laying workers), and there is evidence that they are genetically differentiated, suggesting a lack of gene flow. To develop a better understanding of the mechanism involved in speciation of Diacamma, we have analysed karyotypes of D. ceylonense and 'nilgiri'. In both, we found surprising inter-individual and intra-individual karyotypic mosaicism. The observed numerical variability, at both intra-individual and inter-individual levels, does not appear to have hampered the sustainability of the chromosomal diversity in each population under study. Since the related D. indicum displays no such intra-individual or inter-individual variability whatsoever under identical experimental conditions, these results are unlikely to be artifacts. Although no known mechanisms can account for karyotypic variability of this nature, we believe that the present findings would provide opportunities for exciting new discoveries concerning the origin, maintenance and significance of intra-individual and inter-individual karyotypic mosaicism.
Abstract:
This paper describes a novel mimetic technique that uses a frequency-domain approach and digital filters for automatic generation of EEG reports. Digitized EEG data files, transported on a cartridge, have been used for the analysis. The signals are filtered into alpha, beta, theta and delta bands with cascaded fourth-order Butterworth infinite impulse response (IIR) digital bandpass filters. The maximum amplitude, mean frequency, continuity index and degree of asymmetry have been computed for a given EEG frequency band. Finally, the EEG records have been searched for the presence of artifacts (eye movement or muscle artifacts).
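Two of the per-band measures mentioned above, maximum amplitude and mean frequency, can be sketched in the frequency domain. The sampling rate, band edges and FFT-based approach below are illustrative assumptions, not the paper's Butterworth-filter implementation:

```python
import numpy as np

FS = 128                                   # sampling rate in Hz (assumed)
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_metrics(x, fs=FS):
    # Maximum spectral amplitude and amplitude-weighted mean frequency per
    # band, computed via FFT (a stand-in for the paper's IIR band filters).
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    out = {}
    for name, (lo, hi) in BANDS.items():
        m = (freqs >= lo) & (freqs < hi)
        amp = spec[m]
        out[name] = {
            "max_amplitude": float(amp.max()),
            # Small epsilon guards near-empty bands against 0/0.
            "mean_frequency": float((freqs[m] * amp).sum() / (amp.sum() + 1e-12)),
        }
    return out

# Synthetic 'EEG': a dominant 10 Hz alpha rhythm plus weaker 20 Hz beta activity.
t = np.arange(0, 4, 1.0 / FS)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
metrics = band_metrics(x)
```

On this synthetic record, the alpha band dominates in amplitude and each band's mean frequency sits at its rhythm's frequency.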
Abstract:
A simple, non-iterative method for component wave delineation from the electrocardiogram (ECG) is derived by modelling its discrete cosine transform (DCT) as a sum of damped cosinusoids. Amplitude, phase, damping factor and frequency parameters of each of the cosinusoids are estimated by the extended Prony method. Different component waves are represented by non-overlapping clusters of model poles in the z plane, and thus a component wave is derived by the addition of the inverse-transformed (IDCT) impulse responses of the poles in the cluster. Akaike's information criterion (AIC) is used to determine the model order. The method performed satisfactorily even in the presence of artifacts. The efficacy of the method is illustrated by analysis of continuous strips of ECG data.
Abstract:
We calculate the kaon B parameter in quenched lattice QCD at beta=6.0 using Wilson fermions at kappa=0.154 and 0.155. We use two kinds of nonlocal ("smeared") sources for quark propagators to calculate the matrix elements between states of definite momentum. The use of smeared sources yields results with much smaller errors than obtained in previous calculations with Wilson fermions. By combining results for p=(0,0,0) and p=(0,0,1), we show that one can carry out the nonperturbative subtraction necessary to remove the dominant lattice artifacts induced by the chiral-symmetry-breaking term in the Wilson action. Our final results are in good agreement with those obtained using staggered fermions. We also present results for B parameters of the Delta I = 3/2 part of the electromagnetic penguin operators, and preliminary results for B(K) in the presence of two flavors of dynamical quarks.
Abstract:
We investigate the Nernst effect in a mesoscopic two-dimensional electron system (2DES) at low magnetic fields, before the onset of Landau level quantization. The overall magnitude of the Nernst signal agrees well with semiclassical predictions. We observe reproducible mesoscopic fluctuations in the signal that diminish significantly with an increase in temperature. We also show that the Nernst effect exhibits an anomalous component that is correlated with an oscillatory Hall effect. This behavior may be able to distinguish between different spin-correlated states in the 2DES.
Abstract:
Large external memory bandwidth requirements lead to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transform-less compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift; thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encode application. Reductions of 24-39% in peak bandwidth and 23-31% in total average power consumption are observed for IBBP sequences.
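The idea of storing the quantization error separately, so that motion estimation reads a cheap lossy reference while motion compensation recovers the exact one, can be sketched as follows. The scalar quantizer and step size are illustrative, not the paper's compression scheme:

```python
import numpy as np

Q = 8  # quantization step (illustrative)

def compress_ref(frame, q=Q):
    # Lossy reference plus separately stored quantization error.  Motion
    # estimation reads only `lossy` (low bandwidth); motion compensation
    # adds `error` back to recover the exact reference and avoid drift.
    lossy = (frame // q) * q      # coarse reference
    error = frame - lossy         # bounded residual: 0 <= error < q
    return lossy, error

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.int16)
lossy, error = compress_ref(frame)
exact = lossy + error             # lossless reconstruction for MC
```

The bounded residual is what makes the MC bandwidth overhead small: each error sample needs only log2(Q) bits.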
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
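The second approach, one-bit threshold quantization at each sensor followed by a fusion rule, can be sketched as follows. The threshold value, the readings and the counting fusion rule are all illustrative assumptions, not the paper's derived optimum:

```python
import numpy as np

def sensor_bit(reading, threshold=1.5):
    # One-bit (two-level) quantization via a threshold test; the
    # threshold value here is illustrative.
    return int(reading > threshold)

def fusion_center(bits, k):
    # Counting ('k-out-of-n') fusion rule: declare an intruder when at
    # least k of the n broadcast bits are 1.
    return sum(bits) >= k

clutter = np.array([0.2, -0.5, 1.1, 0.3, -1.2, 0.8, 0.0, 0.4])  # noise only
intruder = clutter + 3.0           # an intruder lifts the local readings
bits_clutter = [sensor_bit(r) for r in clutter]
bits_intruder = [sensor_bit(r) for r in intruder]
# fusion_center(bits_intruder, 5) -> True; fusion_center(bits_clutter, 5) -> False
```

Each node sends a single bit, so the communication cost is fixed regardless of reading precision; the paper's contribution is showing when such a threshold test is the optimal one-bit quantizer.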
Abstract:
High-sensitivity detection techniques are required for indoor navigation using Global Navigation Satellite System (GNSS) receivers, and typically a combination of coherent and non-coherent integration is used as the test statistic for detection. The coherent integration exploits the deterministic part of the signal and is limited by the residual frequency error, navigation data bits and user dynamics, which are not known a priori. So non-coherent integration, which involves squaring of the coherent integration output, is used to improve the detection sensitivity. Due to this squaring, it is robust against the artifacts introduced by data bits and/or frequency error. However, it is susceptible to uncertainty in the noise variance, and this can lead to fundamental sensitivity limits in detecting weak signals. In this work, the performance of conventional non-coherent integration-based GNSS signal detection is studied in the presence of noise uncertainty. It is shown that the performance of current state-of-the-art GNSS receivers is close to the theoretical SNR limit for reliable detection at moderate levels of noise uncertainty. Alternative robust post-coherent detectors are also analyzed, and are shown to alleviate the noise uncertainty problem. Monte Carlo simulations are used to confirm the theoretical predictions.
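The robustness of the squared (non-coherent) statistic to data-bit sign flips can be illustrated directly; block lengths and signal values below are arbitrary:

```python
import numpy as np

def noncoherent_stat(samples, n_coh, n_nc):
    # n_nc non-coherent accumulations of squared n_coh-point coherent
    # sums.  Squaring discards sign/phase, so data-bit flips between
    # coherent blocks do not cancel the signal.
    blocks = samples[: n_coh * n_nc].reshape(n_nc, n_coh)
    coherent = blocks.sum(axis=1)        # coherent integration per block
    return float((np.abs(coherent) ** 2).sum())

n_coh, n_nc = 100, 5
signal = np.full(n_coh * n_nc, 0.1)      # weak constant-sign signal
flipped = signal.reshape(n_nc, n_coh).copy()
flipped[1::2] *= -1                      # simulated data-bit sign flips
flipped = flipped.ravel()
s_plain = noncoherent_stat(signal, n_coh, n_nc)
s_flip = noncoherent_stat(flipped, n_coh, n_nc)
# s_plain == s_flip: the statistic is unaffected by the bit flips.
```

The same squaring that gives this robustness makes the statistic depend on the noise variance, which is why noise uncertainty limits its sensitivity.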
Abstract:
We present an experimental investigation of a new reconstruction method for off-axis digital holographic microscopy (DHM). The method effectively suppresses the object auto-correlation, commonly called the zero-order term, from holographic measurements, thereby removing from the reconstructed complex wavefield the artifacts generated by the intensities of the two beams employed for interference. The algorithm is based on non-linear filtering and can be applied to standard DHM setups with realistic recording conditions. We study the applicability of the technique under different experimental configurations, such as topographic imaging of microscopic specimens and speckle holograms.
Abstract:
In engineering design, the end goal is the creation of an artifact, product, system, or process that fulfills some functional requirements at some desired level of performance. As such, knowledge of functionality is essential in a wide variety of tasks in engineering activities, including modeling, generation, modification, visualization, explanation, evaluation, diagnosis, and repair of these artifacts and processes. A formal representation of functionality is essential for supporting any of these activities on computers. The goal of Parts 1 and 2 of this Special Issue is to bring together the state of knowledge of representing functionality in engineering applications from both the engineering and the artificial intelligence (AI) research communities.
Abstract:
As with 1,2-diphenylethane (dpe), X-ray crystallographic methods measure the central bond in meso-3,4-diphenylhexane-2,5-dione (dphd) as significantly shorter than normal for an sp(3)-sp(3) bond. The same methods measure the benzylic (ethane C-Ph) bonds in dphd as unusually long for sp(3)-sp(2) liaisons. Torsional motions of the phenyl rings about the C-Ph bonds have been proposed as the artifacts behind the result of a 'short' central bond in dpe. While a similar explanation can, presumably, hold for the even 'shorter' central bond in dphd, it cannot account for the 'long' C-Ph bonds. The phenyl groups, departing much from regular hexagonal shape, adopt highly skewed conformations with respect to the plane constituted by the four central atoms. It is thought that the thermal motions of the phenyl rings, conditioned by the potential wells in which they are ensconced in the unit cell, are largely libratory around their normal axes. In what appears to be a straightforward explanation under the 'rigid-body' concept, these libratory motions of the phenyl rings, which at the same time account for the 'short' central bond, are the artifacts behind the 'long' measurement of the C-Ph bonds. These motions could be superimposed on torsional motions analogous to those proposed in the case of dpe. An inspection of the ORTEP diagram from the 298 K data on dphd clearly suggests these possibilities. Supportive evidence for these qualitative explanations, from an analysis of the differences between the mean square displacements of C(1) and C(7)/C(1a) and C(7a) based on the 'rigid-body model', is discussed.