871 results for Multi-modality medical images


Relevance: 20.00%

Publisher:

Abstract:

A new approach to recognition of images using invariant features based on higher-order spectra is presented. Higher-order spectra are translation invariant because translation produces linear phase shifts which cancel. Scale and amplification invariance are satisfied by the phase of the integral of a higher-order spectrum along a radial line in higher-order frequency space, because the contour of integration maps onto itself and both the real and imaginary parts are affected equally by the transformation. Rotation invariance is introduced by deriving invariants from the Radon transform of the image and using the cyclic-shift invariance property of the discrete Fourier transform magnitude. Results on synthetic and actual images show isolated, compact clusters in feature space and high classification accuracies.
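
A minimal Python sketch of the radial-integration idea described above, applied to a single 1-D projection (e.g. one row of the Radon transform): the triple product cancels translation-induced phase, and only the phase of the integral is kept. The direct estimator, the radial slope alpha, and the function name are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def radial_bispectrum_phase(projection, alpha=0.5):
        # Phase of the bispectrum integrated along the radial line f2 = alpha * f1.
        # A shift of the projection multiplies X(f) by exp(-j*2*pi*f*t0); in the
        # triple product X(f1) X(f2) conj(X(f1 + f2)) these linear phases cancel.
        X = np.fft.fft(np.asarray(projection, dtype=float))
        n = len(X)
        acc = 0.0 + 0.0j
        for k1 in range(1, n // 4):
            k2 = int(round(alpha * k1))
            acc += X[k1] * X[k2] * np.conj(X[k1 + k2])  # bispectrum sample B(k1, k2)
        return np.angle(acc)  # phase of the radial integral: the invariant feature

Rotation invariance could then follow by computing this feature for every projection angle and taking the magnitude of the DFT over the angle index, per the cyclic-shift argument in the abstract.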

Relevance: 20.00%

Publisher:

Abstract:

Features derived from the trispectra of DFT magnitude slices are used for multi-font digit recognition. These features are insensitive to translation, rotation, or scaling of the input. They are also robust to noise. Classification accuracy tests were conducted on a common database of 256 × 256 pixel bilevel images of digits in 9 fonts. Randomly rotated and translated noisy versions were used for training and testing. The results indicate that the trispectral features are better than moment invariants and affine moment invariants. They achieve a classification accuracy of 95% compared to about 81% for Hu's (1962) moment invariants and 39% for the Flusser and Suk (1994) affine moment invariants on the same data in the presence of 1% impulse noise using a 1-NN classifier. For comparison, a multilayer perceptron with no normalization for rotations and translations yields 34% accuracy on 16 × 16 pixel low-pass filtered and decimated versions of the same data.
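
For context on the 1-NN evaluation protocol mentioned above, the following is a minimal sketch of nearest-neighbour classification over invariant feature vectors. The Euclidean metric, the array shapes, and the function name are assumptions for illustration; the paper's exact distance measure is not reproduced here.

    import numpy as np

    def nn1_classify(train_feats, train_labels, test_feats):
        # Assign each test feature vector the label of its nearest training vector.
        train_feats = np.asarray(train_feats, dtype=float)
        train_labels = np.asarray(train_labels)
        preds = []
        for x in np.asarray(test_feats, dtype=float):
            dists = np.linalg.norm(train_feats - x, axis=1)  # distance to each prototype
            preds.append(train_labels[np.argmin(dists)])     # nearest neighbour's label
        return np.asarray(preds)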

Relevance: 20.00%

Publisher:

Abstract:

Road surface macrotexture is identified as one of the factors contributing to the surface's skid resistance. Existing methods of quantifying the surface macrotexture, such as the sand patch test and the laser profilometer test, are either expensive or intrusive, requiring traffic control. High-resolution cameras have made it possible to acquire good quality images from roads for the automated analysis of texture depth. In this paper, a granulometric method based on image processing is proposed to estimate the road surface texture coarseness distribution from image edge profiles. More than 1300 images were acquired from two different sites, extending to a total of 2.96 km. The images were acquired using camera orientations of 60 and 90 degrees. The road surface is modeled as a texture of particles, and the size distribution of these particles is obtained from chord lengths across edge boundaries. The mean size from each distribution is compared with the sensor-measured texture depth obtained using a laser profilometer. By tuning the edge detector parameters, a coefficient of determination of up to R² = 0.94 between the proposed method and the laser profilometer method was obtained. The high correlation is also confirmed by robust calibration parameters that enable the method to be used for unseen data after it has been calibrated over road surface data with similar surface characteristics and under similar imaging conditions.
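
A simplified sketch of the chord-length measurement described above: scan each row of a binary edge map and take the gaps between successive edge pixels as chord lengths, with the mean serving as a coarseness statistic. The row-wise scan and the mean summary are assumptions; the paper's full granulometry and its calibration against the laser profilometer are not reproduced here.

    import numpy as np

    def mean_chord_length(edge_map):
        # Collect horizontal chord lengths between consecutive edge pixels.
        chords = []
        for row in np.asarray(edge_map, dtype=bool):
            edge_cols = np.flatnonzero(row)        # column indices of edge pixels in this row
            if edge_cols.size > 1:
                chords.extend(np.diff(edge_cols))  # gaps = chord lengths across particles
        return float(np.mean(chords)) if chords else 0.0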

Relevance: 20.00%

Publisher:

Abstract:

This paper investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to be highly robust in the presence of noise. The fusion structure for the audio and visual information is based around the use of multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Recent work with multi-modal MSHMMs has been performed successfully for the task of speech recognition. The use of temporal lip information for speaker identification has been investigated previously (T.J. Wark et al., 1998); however, this was restricted to output fusion via single-stream HMMs. We present an extension to this previous work and show that an MSHMM is a valid structure for multi-modal speaker identification.
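
As a rough illustration of the stream-fusion principle behind the MSHMM, the sketch below combines per-speaker log-likelihoods from the audio and visual streams with exponential stream weights and picks the best-scoring speaker. The weight value and the speaker-level (rather than state-level) fusion are assumptions for illustration, not the paper's exact configuration.

    import numpy as np

    def fuse_stream_scores(audio_loglik, visual_loglik, audio_weight=0.7):
        # Weighted combination of the two streams' log-likelihoods (one entry per
        # enrolled speaker); the decision is the speaker with the highest score.
        audio_loglik = np.asarray(audio_loglik, dtype=float)
        visual_loglik = np.asarray(visual_loglik, dtype=float)
        combined = audio_weight * audio_loglik + (1.0 - audio_weight) * visual_loglik
        return int(np.argmax(combined))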