455 results for Viewpoint Invariant Detection
at Queensland University of Technology - ePrints Archive
Abstract:
Highly sensitive infrared (IR) cameras provide high-resolution diagnostic images of the temperature and vascular changes of breasts. These images can be processed to emphasize hot spots that exhibit early and subtle changes owing to pathology. The resulting images show clusters that appear random in shape and spatial distribution but carry class-dependent information in shape and texture. Automated pattern recognition techniques are challenged by changes in the location, size and orientation of these clusters. Higher-order spectral invariant features provide robustness to such transformations and are suited to extracting texture- and shape-dependent information from noisy images. In this work, the effectiveness of bispectral invariant features in the diagnostic classification of breast thermal images into malignant, benign and normal classes is evaluated, and a phase-only variant of these features is proposed. High-resolution IR images of breasts, captured with a measurement accuracy of ±0.4% of full scale and a temperature resolution of 0.1 °C (black body), depicting malignant, benign and normal pathologies are used in this study. Breast images are registered using their lower boundaries, which are automatically extracted using landmark points whose locations are learned during training. Boundaries are extracted using Canny edge detection followed by elimination of inner edges. Breast images are then segmented using fuzzy c-means clustering, and the hottest regions are selected for feature extraction. Bispectral invariant features are extracted from Radon projections of these images. An AdaBoost classifier is used to select and fuse the best features during training and then to classify unseen test images into malignant, benign and normal classes. A data set comprising 9 malignant, 12 benign and 11 normal cases is used for evaluation of performance. Malignant cases are detected with 95% accuracy. A variant of the features using the normalized bispectrum, which discards all magnitude information, is shown to perform better for classification between benign and normal cases, with 83% accuracy compared with 66% for the original features.
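The processing chain described above (fuzzy c-means segmentation of the hottest regions, Radon projections, bispectral invariant features, AdaBoost classification) can be sketched roughly as follows. This is a minimal illustration under stated assumptions rather than the authors' implementation: boundary extraction and registration are omitted, the helper names bispectral_invariant and extract_features are hypothetical, and NumPy, scikit-image, scikit-fuzzy and scikit-learn are assumed to be available.

```python
# Minimal sketch of the described pipeline (not the authors' code).
import numpy as np
from skimage.transform import radon
from skfuzzy.cluster import cmeans
from sklearn.ensemble import AdaBoostClassifier

def bispectral_invariant(signal, slope=0.5):
    """Phase of the bispectrum integrated along the radial line f2 = slope*f1,
    in the spirit of the bispectral invariants referred to in the abstract."""
    X = np.fft.fft(signal)
    n = len(X)
    acc = 0.0 + 0.0j
    for f1 in range(1, int(n / (2 * (1 + slope)))):   # stay below Nyquist
        f2 = int(round(slope * f1))
        acc += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.angle(acc)

def extract_features(image, n_angles=36, n_clusters=3):
    """Select the hottest fuzzy c-means cluster, then take one bispectral
    invariant per Radon projection of the masked image."""
    data = image.reshape(1, -1).astype(float)          # shape (features, N)
    centers, u, *_ = cmeans(data, c=n_clusters, m=2.0, error=1e-4, maxiter=200)
    hottest = np.argmax(centers.ravel())
    mask = (np.argmax(u, axis=0) == hottest).reshape(image.shape)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image * mask, theta=theta, circle=False)
    return np.array([bispectral_invariant(sinogram[:, k]) for k in range(n_angles)])

# Classification (labels 0/1/2 for normal/benign/malignant), e.g.:
#   X = np.stack([extract_features(img) for img in training_images])
#   clf = AdaBoostClassifier(n_estimators=50).fit(X, training_labels)
```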
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks, including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. Compared with traditional narrow-field-of-view perspective cameras, wide-angle cameras allow more accurate estimates of camera egomotion to be obtained from their images. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene from very different viewpoints, which makes them well suited to visual place recognition. However, the ability to estimate camera egomotion and to recognise the same scene in two different images depends on reliably detecting and describing the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images, and applying them directly to wide-angle images is problematic because no account is taken of the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central-projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion. Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approach used to do this. Extensive experiments using real and synthetically generated wide-angle images are used to validate the two new keypoint detectors and the method of keypoint description. The better of the two new keypoint detectors is applied to vision-based localisation tasks, including visual odometry and visual place recognition, using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed that attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
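As a concrete illustration of the mapping relied on above, the sketch below back-projects a pixel of a central-projection wide-angle image onto the unit sphere. It assumes an equidistant fisheye model (r = f·θ); the specific model and the name pixel_to_sphere are illustrative only, since the thesis treats general central-projection cameras.

```python
# Hypothetical helper: map an image pixel to a point on the unit sphere,
# assuming an equidistant fisheye camera (r = f * theta).
import numpy as np

def pixel_to_sphere(u, v, cx, cy, f):
    du, dv = u - cx, v - cy            # offset from the principal point
    r = np.hypot(du, dv)               # radial distance in the image plane
    theta = r / f                      # equidistant model: angle from optical axis
    phi = np.arctan2(dv, du)           # azimuth around the optical axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])   # unit vector on the sphere
```

Detecting keypoints on this spherical representation, rather than on the distorted image plane, is what makes such detectors invariant to the camera's radial distortion.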
Abstract:
Stationary processes are random variables whose value is a signal and whose distribution is invariant to translation in the domain of the signal. They are intimately connected to convolution, and therefore to the Fourier transform, since the covariance matrix of a stationary process is a Toeplitz matrix, and Toeplitz matrices are the expression of convolution as a linear operator. This thesis utilises this connection in the study of i) efficient training algorithms for object detection and ii) trajectory-based non-rigid structure-from-motion.
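The connection stated above can be written out explicitly; the following restates standard facts rather than material from the thesis.

```latex
% For a zero-mean, discretely sampled stationary process x, the covariance
% depends only on the lag, so the covariance matrix is Toeplitz:
\operatorname{Cov}(x_i, x_j) = k(i - j)
\quad\Longrightarrow\quad
\Sigma_{ij} = k(i - j).
% Applying \Sigma is a convolution with k, and under a circulant (periodic)
% approximation it is diagonalised by the discrete Fourier transform F:
(\Sigma x)_i = \sum_j k(i - j)\, x_j = (k * x)_i,
\qquad
\Sigma \approx F^{\ast} \operatorname{diag}(\hat{k})\, F .
```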