Abstract:
3D face recognition has been an active area of research for the past several years. For a 3D face recognition system, one would like an accurate as well as low-cost setup for constructing the 3D face model. In this paper, we use a profilometry approach to obtain a 3D face model. This method gives a low-cost solution to the problem of acquiring 3D data, and the 3D face models it generates are sufficiently accurate. We also develop an algorithm that can use the 3D face models generated by this method for recognition.
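As a rough illustration of fringe-projection profilometry (the abstract does not spell out which variant the paper uses, so the three-step phase-shift formula below is an assumption), the wrapped phase recovered from three shifted fringe images is proportional to surface height once the setup geometry is calibrated:

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Three-step phase-shifting profilometry: given fringe images captured at
    phase shifts of -120, 0 and +120 degrees, recover the wrapped phase map.
    The phase encodes surface height after geometric calibration."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```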
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed point, i.e., monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms and provide conceptually new schemes for error correction.
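As a sanity check on the first variation, the sketch below (a numpy demo under our own conventions, not code from the paper) applies one level of the recursion V = U R_s U† R_t U, where R_s and R_t shift the phases of the source and target states by e^{iπ/3}, and verifies that a failure probability ε falls to ε³, the q = 1 case of ε^{2q+1}:

```python
import numpy as np

def phase_op(dim, index, phase):
    """Selective phase shift: multiply the amplitude of basis state |index> by exp(i*phase)."""
    op = np.eye(dim, dtype=complex)
    op[index, index] = np.exp(1j * phase)
    return op

def pi3_step(U, s=0, t=1):
    """One level of the pi/3 fixed-point recursion: V = U R_s U^dagger R_t U."""
    dim = U.shape[0]
    Rs = phase_op(dim, s, np.pi / 3)
    Rt = phase_op(dim, t, np.pi / 3)
    return U @ Rs @ U.conj().T @ Rt @ U

rng = np.random.default_rng(1)
dim = 4
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)                  # a random unitary "search" operator

eps = 1 - abs(U[1, 0]) ** 2             # failure probability of U alone
V = pi3_step(U)
print(1 - abs(V[1, 0]) ** 2, eps ** 3)  # both ~ eps**3: monotonic convergence
```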
Abstract:
Image and video filtering is a key image-processing task in computer vision, especially in noisy environments. In most cases the noise source is unknown, which poses a major difficulty for the filtering operation. In this paper we present an error-correction-based learning approach to iterative filtering. A new FIR filter is designed in which the filter coefficients are updated according to the Widrow-Hoff rule. Unlike standard filters, the proposed filter has the ability to remove noise without a priori knowledge of the noise. Experimental results show that the proposed filter efficiently removes noise while preserving edges in the image. We demonstrate the capability of the proposed algorithm by testing it on standard images corrupted by Gaussian noise and on real-time video containing inherent noise. Experimental results show that the proposed filter outperforms several existing standard filters.
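A minimal sketch of the Widrow-Hoff (LMS) update for a 2-D FIR filter; the kernel size, step size and the use of a reference image to form the error signal are assumptions for illustration, not details from the paper:

```python
import numpy as np

def lms_filter_train(noisy, reference, ksize=3, mu=1e-6, epochs=1):
    """Train a 2-D FIR filter with the Widrow-Hoff (LMS) rule: slide the kernel
    over the noisy image and nudge the coefficients along the error gradient."""
    k = ksize // 2
    w = np.zeros((ksize, ksize))
    w[k, k] = 1.0                            # start from the identity filter
    H, W = noisy.shape
    for _ in range(epochs):
        for i in range(k, H - k):
            for j in range(k, W - k):
                patch = noisy[i - k:i + k + 1, j - k:j + k + 1]
                y = float(np.sum(w * patch))     # current filter output
                e = float(reference[i, j]) - y   # error signal
                w += mu * e * patch              # Widrow-Hoff update
    return w
```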
Abstract:
We present an irreversible watermarking technique robust to affine-transform attacks in camera, biomedical and satellite images stored as monochrome bitmap images. The approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine-transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. Here, a direct-sequence code-division multiple-access (DS-CDMA) approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks such as Gaussian noise, shearing, scaling, rotation, flipping, affine transforms, signal processing and JPEG compression. Performance is measured using standard image-processing metrics.
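A minimal sketch of the DS-CDMA embedding idea in the DCT domain; the coefficient band, chip length and embedding strength are hypothetical choices, and the paper's normalisation and DWT stages are omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_dsss(image, bits, chips_per_bit=256, strength=2.0, key=42):
    """Spread each message bit with a pseudo-random +/-1 chip sequence and add
    it to a fixed band of DCT coefficients (band and strength are illustrative)."""
    rng = np.random.default_rng(key)          # a shared key seeds the PN generator
    coeffs = dctn(image.astype(float), norm='ortho')
    flat = coeffs.ravel()
    start = 1000                              # hypothetical mid-frequency offset
    for n, bit in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chips_per_bit)
        sign = 1.0 if bit else -1.0
        sl = slice(start + n * chips_per_bit, start + (n + 1) * chips_per_bit)
        flat[sl] += strength * sign * pn      # superpose the spread bit
    return idctn(flat.reshape(coeffs.shape), norm='ortho')
```

Extraction would regenerate the same PN sequences from the shared key and correlate them against the corresponding DCT band; the sign of each correlation recovers the bit.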
Abstract:
This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Owing to the incomplete projection data, the reconstructed image contains truncation artifacts that extend into the region of interest (ROI), making it unsuitable for further use. Data-completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion, then apply the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction, and evaluate the quality of the resulting images.
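A minimal sketch of projection completion by linear prediction; the windowing that gives WLP its name is omitted here, and the model order and extension length are illustrative:

```python
import numpy as np

def lp_extrapolate(signal, order=10, n_extra=64):
    """Extend a truncated projection row by linear prediction: fit AR
    coefficients on the known samples via the Yule-Walker equations, then
    recursively predict past the truncation point. Assumes len(signal) > order."""
    r = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])   # prediction coefficients
    out = list(signal)
    for _ in range(n_extra):
        # predict the next sample from the most recent `order` samples
        out.append(np.dot(a, out[-1:-order - 1:-1]))
    return np.asarray(out)
```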
Abstract:
This paper describes the efforts at the MILE lab, IISc, to create 100,000-word databases in each of Kannada and Tamil for the design and development of online handwriting recognition. The data has been collected from over 600 users in order to capture the variations in writing style. We describe features of the scripts and how the number of symbols was reduced so that the data can be trained effectively for recognition. The word list includes all the characters, Kannada and Indo-Arabic numerals, punctuation and other symbols. A semi-automated tool annotates the data from the stroke to the word level. It segments each word into stroke groups and also acts as a validation mechanism for the segmentation. The tool displays the strokes, stroke groups and aksharas of a word, and hence can be used to study various writing styles and delayed strokes, and to assign quality tags to the words. The tool is currently being used to annotate the Tamil and Kannada data. The output is stored in a standard XML format.
Abstract:
The problem of on-line recognition and retrieval of relatively weak industrial signals such as partial discharges (PD), buried in excessive noise, is addressed in this paper. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), owing to the overlapping broadband frequency spectra of PI and PD pulses; as a result, on-line, on-site PD measurement is hardly possible with conventional frequency-domain DSP techniques. The observed PD signal is modeled as a linear combination of systematic and random components employing probabilistic principal component analysis (PPCA), and the pdf of the underlying stochastic process is obtained. The PD/PI pulses are taken as the mean of the process and modeled using non-parametric methods based on smooth FIR filters, with a maximum a posteriori (MAP) procedure employed to estimate the filter coefficients. The classification of the pulses is undertaken using a simple PCA classifier. The proposed methods were found to be effective in automatically retrieving PD pulses while completely rejecting PI.
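For the decomposition step, a minimal sketch of the closed-form maximum-likelihood PPCA fit (Tipping and Bishop); this is the generic model, not the paper's specific pulse formulation:

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form ML probabilistic PCA: model each observation as
    W z + mu + isotropic noise, so the span of W captures the systematic
    component and sigma2 the random one. X has shape (n_samples, n_features)."""
    n, d = X.shape
    mu = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    lam = s ** 2 / n                      # eigenvalues of the sample covariance
    sigma2 = lam[q:].mean()               # ML estimate of the noise variance
    W = Vt[:q].T * np.sqrt(np.maximum(lam[:q] - sigma2, 0.0))
    return W, mu, sigma2
```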
Abstract:
Editors' note: Flexible, large-area display and sensor arrays are finding growing applications in multimedia and future smart homes. This article first analyzes and compares current flexible devices, then discusses the implementation, requirements, and testing of flexible sensor arrays.—Jiun-Lang Huang (National Taiwan University) and Kwang-Ting (Tim) Cheng (University of California, Santa Barbara)
Abstract:
Fusion of multi-sensor imaging data enables a synergetic interpretation of the complementary information obtained by sensors of different spectral ranges. Multi-sensor data of diverse spectral, spatial and temporal resolutions require advanced numerical techniques for analysis and interpretation. This paper reviews ten advanced pixel-based image fusion techniques: Component Substitution (COS), Local Mean and Variance Matching, Modified IHS (Intensity Hue Saturation), Fast Fourier Transform-enhanced IHS, Laplacian Pyramid, Local Regression, Smoothing Filter (SF), Sparkle, SVHC and Synthetic Variable Ratio. The techniques were tested on IKONOS data (panchromatic band at 1 m spatial resolution and four multispectral bands at 4 m spatial resolution). Evaluation of the fused results through various accuracy measures revealed that the SF and COS methods produce images closest to what the corresponding multispectral sensor would observe at the highest resolution level (1 m).
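As a concrete reference point, the IHS family of methods substitutes the panchromatic band for the intensity of the upsampled multispectral image. A minimal sketch, assuming a 3-band image already resampled to the panchromatic grid:

```python
import numpy as np

def ihs_fusion(ms, pan):
    """Basic IHS pan-sharpening sketch: swap the intensity component of an
    upsampled 3-band multispectral image (ms, shape HxWx3) with the
    higher-resolution panchromatic band (pan, shape HxW)."""
    intensity = ms.mean(axis=2)                  # simple intensity component
    # match pan statistics to the intensity band before substitution
    pan_adj = (pan - pan.mean()) * (intensity.std() / pan.std()) + intensity.mean()
    return ms + (pan_adj - intensity)[..., None] # additive IHS substitution
```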
Abstract:
This report describes some preliminary experiments on the use of the relaxation technique for the reconstruction of the elements of a matrix given their various directional sums (or projections).
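A minimal sketch of one such relaxation scheme for the two-projection case (row and column sums); the additive-correction form and relaxation factor are assumptions for illustration, not the report's exact iteration:

```python
import numpy as np

def relax_reconstruct(row_sums, col_sums, iters=200, relax=1.0):
    """Additive relaxation: repeatedly correct a matrix estimate so that its
    row and column sums approach the measured directional sums."""
    row_sums, col_sums = np.asarray(row_sums), np.asarray(col_sums)
    m, n = len(row_sums), len(col_sums)
    X = np.zeros((m, n))
    for _ in range(iters):
        # distribute each row's residual equally over its entries, then columns
        X += relax * (row_sums - X.sum(axis=1))[:, None] / n
        X += relax * (col_sums - X.sum(axis=0))[None, :] / m
    return X
```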
Abstract:
This paper presents a preliminary analysis of the Kannada WordNet and a set of relevant computational tools. Although the design has been inspired by the famous English WordNet and, to a certain extent, by the Hindi WordNet, the unique features of the Kannada WordNet are graded antonyms and meronymy relationships, nominal as well as verbal compounding, complex verb constructions and an efficient underlying database design (built to handle storage and display of Kannada Unicode characters). The Kannada WordNet will not only add to the sparse collection of machine-readable Kannada dictionaries but also give new insights into the Kannada vocabulary. It provides interfaces for applications such as Kannada machine translation, spell checking and semantic analysis.
Abstract:
Image segmentation is formulated as a stochastic process whose invariant distribution is concentrated at points of the desired region. By choosing multiple seed points, different regions can be segmented. The algorithm is based on the theory of time-homogeneous Markov chains and has been largely motivated by the technique of simulated annealing. The proposed method has been found to perform well on both clean and noisy real-world images while being computationally far less expensive than stochastic optimisation techniques.
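The abstract does not give the chain's transition kernel, so the following is only a toy sketch in the same spirit: a seeded random walk whose acceptance probability concentrates the region on pixels similar to the seed, annealing-style:

```python
import numpy as np

def stochastic_grow(img, seed, beta=0.1, steps=50000, rng_seed=0):
    """Illustrative sketch (not the paper's exact chain): grow a region from a
    seed pixel via a Markov chain that accepts a neighbouring pixel with a
    probability decaying in its intensity difference from the seed."""
    rng = np.random.default_rng(rng_seed)
    H, W = img.shape
    region = {seed}
    frontier = [seed]
    target = float(img[seed])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(steps):
        i, j = frontier[rng.integers(len(frontier))]   # pick a region pixel
        di, dj = moves[rng.integers(4)]                # propose a neighbour
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W and (ni, nj) not in region:
            if rng.random() < np.exp(-beta * abs(float(img[ni, nj]) - target)):
                region.add((ni, nj))
                frontier.append((ni, nj))
    return region
```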
Abstract:
Conventional encryption techniques are usually applicable to text data and are often unsuited to encrypting multimedia objects, for two reasons. First, the huge sizes associated with multimedia objects make conventional encryption computationally costly. Second, multimedia objects come with massive redundancies that can be exploited to avoid encrypting the objects in their entirety. Hence a class of encryption techniques devoted to encrypting multimedia objects such as images has been developed. These techniques use the fact that the data comprising multimedia objects such as images can, in general, be segregated into two disjoint components, namely salient and non-salient. While the former component contributes to the perceptual quality of the object, the latter only adds minor details to it. In the context of images, the salient component is often much smaller in size than the non-salient component. The encryption effort is considerably reduced if only the salient component is encrypted, leaving the other component unencrypted. A key challenge is to achieve a segregation such that the unencrypted component does not reveal any information about the object itself. In this study, an image encryption approach that uses fractal structures known as space-filling curves to reduce the encryption overhead is presented. In addition, the approach enables high-quality lossy compression of images.
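To make the space-filling-curve idea concrete, the sketch below scans an image along a Hilbert curve and encrypts only a leading fraction of the scan; treating that prefix as the salient component is a placeholder for the paper's actual segregation criterion:

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map a distance d along a Hilbert curve over a 2**order x 2**order grid
    to (x, y) coordinates (standard iterative construction)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def encrypt_salient(img, keystream, salient_fraction=0.1):
    """Scan a square power-of-two uint8 image along the Hilbert curve and XOR
    only the leading (assumed salient) fraction of the scan with a keystream."""
    n = img.shape[0]
    order = n.bit_length() - 1
    scan = np.array([img[y, x] for x, y in
                     (hilbert_d2xy(order, d) for d in range(n * n))])
    k = int(salient_fraction * scan.size)
    scan[:k] ^= keystream[:k]                # encrypt only the salient prefix
    return scan
```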
Abstract:
The preferential accumulation and agglomeration kinetics of nanoparticles suspended in an acoustically levitated water droplet under radiative heating have been studied. Particle image velocimetry performed to map the internal flow field shows a single-cell recirculation whose strength increases with decreasing viscosity. Infrared thermography and high-speed imaging show details of the heating process for various concentrations of nanosilica droplets. The initial stage of heating is marked by fast vaporization of the liquid and a sharp temperature rise. Following this stage, aggregation of nanoparticles results in various structure formations. At low concentrations, a bowl structure of the droplet is dominant, maintained at a constant temperature. At high concentrations, the viscosity of the solution increases, leading to rotation about the levitator axis due to the dominance of centrifugal motion. Such complex fluid motion inside the droplet due to acoustic streaming eventually results in the formation of a ring structure. This horizontal ring eventually reorients itself owing to an imbalance of acoustic forces on the ring, exposing a larger area for laser absorption and causing a subsequent sharp temperature rise.
Abstract:
Existing approaches to digital halftoning of images are based primarily on thresholding. We propose a general framework for image halftoning where some function of the output halftone tracks another function of the input gray-tone image. This approach is shown to unify most existing algorithms and to provide useful insights. Further, the new interpretation allows us to remedy problems in existing algorithms such as error diffusion, and subsequently to achieve halftones having superior quality. The very general nature of the proposed method is an advantage, since it offers a wide choice of the three filters and an update rule. An interesting product of this framework is that equally good, or better, halftones can be obtained by thresholding a noise process instead of the image itself.
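Error diffusion, one of the algorithms the framework unifies, is easy to state concretely: in the framework's terms, the running local average of the halftone tracks that of the gray-tone input. A standard Floyd-Steinberg sketch (not the paper's improved variant):

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic error diffusion: threshold each pixel, then push the
    quantisation error onto unprocessed neighbours with fixed weights."""
    img = gray.astype(float).copy()
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            out[i, j] = 255.0 if img[i, j] >= 128 else 0.0
            err = img[i, j] - out[i, j]
            if j + 1 < W:                img[i, j + 1] += err * 7 / 16
            if i + 1 < H and j > 0:      img[i + 1, j - 1] += err * 3 / 16
            if i + 1 < H:                img[i + 1, j] += err * 5 / 16
            if i + 1 < H and j + 1 < W:  img[i + 1, j + 1] += err * 1 / 16
    return out
```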