957 results for Image features
Abstract:
This paper is concerned with choosing image features for image-based visual servo control and how this choice influences the closed-loop dynamics of the system. In prior work, image features have tended to be chosen on the basis of image-processing simplicity and noise sensitivity. In this paper we show that the choice of feature directly influences the closed-loop dynamics in task space. We focus on the depth-axis control of a visual servo system and analytically compare various approaches recently reported in the literature. The theoretical predictions are verified by experiment.
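To see how the feature choice enters the task-space dynamics, note that for a depth feature s = f(Z) the standard IBVS law s_dot = -lambda (s - s*) induces Z_dot = -lambda (f(Z) - f(Z*)) / f'(Z), so different f give different depth transients. The sketch below integrates this for three illustrative feature choices, which are stand-ins rather than the exact features compared in the paper.

```python
import numpy as np

def simulate(f, fprime, Z0=2.0, Z_star=1.0, lam=1.0, dt=0.01, steps=100):
    """Euler-integrate Z_dot = -lam * (f(Z) - f(Z_star)) / f'(Z)."""
    Z = Z0
    for _ in range(steps):
        Z += dt * (-lam * (f(Z) - f(Z_star)) / fprime(Z))
    return Z

# Three candidate depth features (illustrative stand-ins only):
features = {
    "s = Z":     (lambda Z: Z,         lambda Z: 1.0),
    "s = 1/Z":   (lambda Z: 1.0 / Z,   lambda Z: -1.0 / Z**2),
    "s = log Z": (lambda Z: np.log(Z), lambda Z: 1.0 / Z),
}
for name, (f, fp) in features.items():
    # Same gain, same start: the task-space transient depends on f alone.
    print(f"{name:9s} -> Z after 1 s: {simulate(f, fp):.4f}")
```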
Abstract:
Affine covariant local image features are a powerful tool for many applications, including matching and calibrating wide-baseline images. Local feature extractors that use a saliency map to locate features require adaptation processes in order to extract affine covariant features. The most effective extractors make use of the second moment matrix (SMM) to iteratively estimate the affine shape of local image regions. This paper shows that the Hessian matrix can be used to estimate local affine shape in a similar fashion to the SMM. The Hessian matrix requires significantly less computational effort than the SMM, allowing more efficient affine adaptation. Experimental results indicate that using the Hessian matrix in conjunction with a feature extractor that selects features in regions with high second-order gradients delivers correspondences of equivalent quality in less than 17% of the processing time, compared to the same extractor using the SMM.
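A rough sketch of Hessian-based affine shape estimation under the usual adaptation scheme (transform the region by the inverse square root of the shape matrix until it is isotropic). The handling of the possibly indefinite Hessian via eigenvalue magnitudes is one plausible normalisation, not necessarily the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_2x2(patch, sigma=2.0):
    """2x2 Hessian at the patch centre from Gaussian second derivatives."""
    p = np.asarray(patch, float)
    Ixx = gaussian_filter(p, sigma, order=(0, 2))   # d2/dx2
    Iyy = gaussian_filter(p, sigma, order=(2, 0))   # d2/dy2
    Ixy = gaussian_filter(p, sigma, order=(1, 1))   # d2/dxdy
    cy, cx = p.shape[0] // 2, p.shape[1] // 2
    return np.array([[Ixx[cy, cx], Ixy[cy, cx]],
                     [Ixy[cy, cx], Iyy[cy, cx]]])

def shape_adaptation(H):
    """One adaptation step: inverse square root of |H|.
    The Hessian can be indefinite, so eigenvalue magnitudes are used; a
    full extractor would re-warp the patch by the returned transform and
    iterate until the local region is isotropic."""
    w, V = np.linalg.eigh(H)
    T = V @ np.diag(1.0 / np.sqrt(np.abs(w))) @ V.T
    return T / np.sqrt(np.linalg.det(T))            # area-preserving

rng = np.random.default_rng(0)
patch = gaussian_filter(rng.random((33, 49)), (2, 6))   # anisotropic blob
print(shape_adaptation(hessian_2x2(patch)))
```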
Abstract:
When we actively explore the visual environment, our gaze preferentially selects regions characterized by high contrast and a high density of edges, suggesting that the guidance of eye movements during visual exploration is driven to a significant degree by perceptual characteristics of a scene. Converging findings suggest that the selection of the visual target for the upcoming saccade critically depends on a covert shift of spatial attention. However, it is unclear whether attention selects the location of the next fixation uniquely on the basis of global scene structure or additionally on local perceptual information. To investigate the role of spatial attention in scene processing, we examined eye fixation patterns of patients with spatial neglect during unconstrained exploration of natural images and compared these to healthy and brain-injured control participants. We computed the luminance, colour, contrast, and edge information contained in image patches surrounding each fixation and evaluated whether they differed from randomly selected image patches. At the global level, neglect patients showed the characteristic ipsilesional shift of the distribution of their fixations. At the local level, patients with neglect and control participants fixated image regions in ipsilesional space that were closely similar in local feature content. In contrast, when directing their gaze to contralesional (impaired) space, neglect patients fixated regions of significantly higher local luminance and lower edge content than controls. These results suggest that intact spatial attention is necessary for the active sampling of local feature content during scene perception.
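The fixated-versus-random patch comparison can be sketched as follows, assuming a greyscale image; the specific features (mean luminance, RMS contrast, mean gradient magnitude as edge strength), the data and the names are all illustrative.

```python
import numpy as np

def patch_features(img, y, x, r=16):
    """Mean luminance, RMS contrast and mean edge strength of a patch."""
    patch = img[y - r:y + r, x - r:x + r].astype(float)
    gy, gx = np.gradient(patch)
    return patch.mean(), patch.std(), np.hypot(gx, gy).mean()

rng = np.random.default_rng(0)
img = rng.random((480, 640))                    # stand-in greyscale image
fixations = [(100, 200), (240, 320)]            # stand-in fixation points
randoms = list(zip(rng.integers(16, 464, 100), rng.integers(16, 624, 100)))
fix = np.array([patch_features(img, y, x) for y, x in fixations])
rnd = np.array([patch_features(img, y, x) for y, x in randoms])
print("fixated mean (lum, contrast, edges):", fix.mean(axis=0))
print("random  mean (lum, contrast, edges):", rnd.mean(axis=0))
```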
Abstract:
Falls are one of the greatest threats to elderly health in daily living routines and activities. It is therefore very important to detect falls of an elderly person in a timely and accurate manner, so that immediate response and proper care can be provided by sending fall alarms to caregivers. Radar is an effective non-intrusive sensing modality well suited for this purpose: it can detect human motion in all types of environments, penetrate walls and fabrics, preserve privacy, and is insensitive to lighting conditions. Micro-Doppler features in the radar signal, corresponding to human body motion and gait, are used to detect falls with a narrowband pulse-Doppler radar. Human motions cause time-varying Doppler signatures, which are analyzed using time-frequency representations and matching pursuit decomposition (MPD) for feature extraction and fall detection. The extracted features include MPD features and the principal components of the time-frequency signal representations. To analyze the sequential characteristics of typical falls, the extracted features are used for training and testing hidden Markov models (HMM) in different falling scenarios. Experimental results demonstrate that the proposed algorithm and method achieve fast and accurate fall detection. The risk of falls increases sharply when the elderly or patients try to exit beds. Thus, if a bed exit can be detected at an early stage of the motion, the related injuries can be prevented with high probability. To detect bed exits for fall prevention, the trajectory of head movements is used to recognize such motion. A head detector is trained using histogram of oriented gradients (HOG) features of the head and shoulder areas from recorded bed-exit images. A data association algorithm is applied to the head detection results to eliminate false alarms. The three-dimensional (3D) head trajectories are then constructed by matching scale-invariant feature transform (SIFT) keypoints in the detected head areas from the left and right stereo images. The extracted 3D head trajectories are used for training and testing an HMM-based classifier for recognizing bed-exit activities. The classifier results are presented and discussed in the thesis, demonstrating the effectiveness of the proposed stereo-vision-based bed exit detection approach.
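As an illustration of the micro-Doppler pipeline, the sketch below computes a time-frequency representation, takes its principal components as features, and fits a Gaussian HMM using the third-party hmmlearn package. The spectrogram stands in for the MPD features, and the synthetic signal and all parameter values are illustrative rather than the thesis settings; a real system would fit one HMM per activity class and compare log-likelihoods.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
fs = 1000.0
# Synthetic Doppler-like signal standing in for radar returns.
sig = np.sin(2 * np.pi * 50 * np.arange(4000) / fs) \
      + 0.1 * rng.standard_normal(4000)

# Time-frequency representation of the motion signature.
f, t, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)
frames = np.log(Sxx + 1e-12).T           # (time, frequency) feature matrix

# Principal components of the time-frequency representation.
feats = PCA(n_components=5).fit_transform(frames)

# Fit an HMM to the feature sequence and report its log-likelihood;
# classification would compare scores across per-class models.
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
hmm.fit(feats)
print("log-likelihood:", hmm.score(feats))
```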
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features; however, this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and be used in a multi-camera environment. A unique localised approach to ground-truth annotation, which reduces the required training data, is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set or a minimal training set is used.
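A simplified sketch of the local counting idea: extract per-blob features from the foreground segmentation, regress each blob's group size, and sum the estimates. The features, the linear regressor and the synthetic training data are plausible stand-ins, not the paper's exact choices.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LinearRegression

def blob_features(mask):
    """Per-blob area, edge-pixel count and bounding-box height/width."""
    labels, n = ndimage.label(mask)
    feats = []
    for i in range(1, n + 1):
        blob = (labels == i).astype(int)
        area = blob.sum()
        edges = (np.abs(np.diff(blob, axis=0)).sum()
                 + np.abs(np.diff(blob, axis=1)).sum())
        ys, xs = np.nonzero(blob)
        feats.append([area, edges,
                      ys.max() - ys.min() + 1, xs.max() - xs.min() + 1])
    return np.array(feats)

# Training on per-blob features with annotated group sizes (synthetic here;
# in practice these come from the localised ground-truth annotation).
rng = np.random.default_rng(1)
X_train = rng.random((50, 4)) * [2000, 300, 60, 60]
y_train = X_train[:, 0] / 400 + rng.normal(0, 0.2, 50)   # size ~ area
reg = LinearRegression().fit(X_train, y_train)

# Counting: the total crowd estimate is the sum of per-blob predictions.
mask = np.zeros((120, 160), bool)
mask[20:60, 30:70] = True            # one foreground blob
mask[80:110, 100:150] = True         # another
print("crowd estimate:", reg.predict(blob_features(mask)).sum())
```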
Abstract:
This paper considers the question of designing a fully image-based visual servo control for a dynamic system. The work is motivated by the ongoing development of image-based visual servo control of small aerial robotic vehicles. The observed targets are coloured blobs on a flat surface to which the normal direction is known. The theoretical framework is directly applicable to the case of markings on a horizontal floor or landing field. The image features used are a first-order spherical moment for position and an image flow measurement for velocity. A fully non-linear adaptive control design is provided that ensures global stability of the closed-loop system. © 2005 IEEE.
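A minimal sketch of the position feature: the first-order spherical moment is the sum of the unit-sphere projections of the observed blob centroids. The camera intrinsics and pixel coordinates below are assumed for illustration.

```python
import numpy as np

K = np.array([[500.0, 0, 320],       # assumed pinhole intrinsics,
              [0, 500.0, 240],       # for illustration only
              [0, 0, 1]])

def spherical_moment(pixels):
    """q = sum of unit-sphere projections of the observed blob centroids."""
    pts = np.column_stack([pixels, np.ones(len(pixels))])   # homogeneous
    rays = (np.linalg.inv(K) @ pts.T).T                     # camera rays
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)     # onto the sphere
    return rays.sum(axis=0)

blobs = np.array([[300.0, 220], [340, 220], [320, 260]])    # pixel centroids
q = spherical_moment(blobs)
print("first-order spherical moment q:", q)
# The position error driving the controller is then q - q_star, where
# q_star is the moment observed at the goal configuration.
```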
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction, as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies.
Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image. To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
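To make the final inference step concrete, here is a hedged sketch in which a naive-Bayes model stands in for the thesis's probabilistic graph model: the scene type is chosen to best explain the detected mid-level concepts under co-occurrence statistics. All scene and object names and all probabilities are invented for illustration.

```python
import numpy as np

scenes = ["beach", "mountain", "street"]
objects = ["sky", "sand", "water", "rock", "car"]
# P(object present | scene), which would be learned from annotated images
# (values invented for the example).
p_obj_given_scene = np.array([
    [0.9, 0.8, 0.9, 0.1, 0.05],   # beach
    [0.9, 0.1, 0.3, 0.8, 0.05],   # mountain
    [0.7, 0.05, 0.05, 0.1, 0.9],  # street
])
p_scene = np.array([1 / 3, 1 / 3, 1 / 3])   # uniform scene prior

def scene_type(detected):
    """MAP scene given a set of detected mid-level concepts."""
    mask = np.array([o in detected for o in objects])
    # Product of P(o|s) for present objects and (1 - P(o|s)) for absent ones.
    lik = np.where(mask, p_obj_given_scene, 1 - p_obj_given_scene).prod(axis=1)
    return scenes[np.argmax(lik * p_scene)]

print(scene_type({"sky", "sand", "water"}))   # -> beach
```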
Abstract:
Local image feature extractors that select local maxima of the determinant of Hessian function have been shown to perform well and are widely used. This paper introduces the negative local minima of the determinant of Hessian function for local feature extraction. The properties and scale-space behaviour of these features are examined and found to be desirable for feature extraction. It is shown how this new feature type can be implemented along with the existing local maxima approach at negligible extra processing cost. Applications to affine covariant feature extraction and sub-pixel-precise corner extraction are demonstrated. Experimental results indicate that the new corner detector is more robust to image blur and noise than existing methods. It is also accurate for a broader range of corner geometries. An affine covariant feature extractor is implemented by combining the minima of the determinant of Hessian with existing scale and shape adaptation methods. This extractor can be implemented alongside the existing Hessian maxima extractor simply by finding both minima and maxima during the initial extraction stage. The minima features increase the number of correspondences two- to four-fold. The additional minima features are very distinct from the maxima features in descriptor space and do not make the matching process more ambiguous.
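A small sketch of the extraction stage described above, finding both the classic positive local maxima and the new negative local minima of a scale-normalised determinant-of-Hessian response at a single scale; the threshold, scale and test image are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def det_hessian(img, sigma=2.0):
    """Scale-normalised determinant-of-Hessian response."""
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    return sigma**4 * (Ixx * Iyy - Ixy**2)

def extract(img, sigma=2.0, thresh=1e-4):
    R = det_hessian(img, sigma)
    # Classic feature type: positive local maxima ...
    maxima = (R == maximum_filter(R, size=3)) & (R > thresh)
    # ... plus the new feature type: negative local minima, found in the
    # same pass at negligible extra cost.
    minima = (R == minimum_filter(R, size=3)) & (R < -thresh)
    return np.argwhere(maxima), np.argwhere(minima)

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((128, 128)), 1.0)   # stand-in image
mx, mn = extract(img)
print(f"{len(mx)} maxima, {len(mn)} minima")
```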
Abstract:
Existing crowd counting algorithms rely on holistic, local or histogram-based features to capture crowd properties. Regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram-based methods, and to compare various image features and regression models. A K-fold cross-validation protocol is followed to evaluate the performance across five public datasets: the UCSD, PETS 2009, Fudan, Mall and Grand Central datasets. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram-based features; that optimal performance is observed using all image features except textures; and that GPR outperforms linear, KNN and NN regression.
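A minimal sketch of the evaluation protocol, assuming scikit-learn: K-fold cross-validation of Gaussian process regression on frame-level crowd features. The data here is synthetic, and the feature extraction itself is out of scope.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((200, 5))             # stand-in size/shape/edge/keypoint features
y = 40 * X[:, 0] + 10 * X[:, 2] + rng.normal(0, 1, 200)   # stand-in crowd size

maes = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gpr.fit(X[tr], y[tr])
    maes.append(mean_absolute_error(y[te], gpr.predict(X[te])))
print("mean MAE over folds:", np.mean(maes))
```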
Abstract:
We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. Rather than specific object parts, the jointly selected features are closer to edges and generic features typical of many natural structures. These generic features generalize better and considerably reduce the computational cost of multi-class object detection.
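The sketch below illustrates the shared-feature idea in a heavily simplified form: at each round, a single decision stump is chosen to minimise the weighted error summed over all classes, instead of one stump per class. The actual joint boosting procedure is GentleBoost-based with a search over class subsets; this is only the core intuition, on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, C = 300, 20, 4
X = rng.standard_normal((n, d))
# Each class c is (noisily) predicted by feature c, so useful features
# can be shared across classes.
Y = np.stack([np.sign(X[:, c] + 0.3 * rng.standard_normal(n))
              for c in range(C)], axis=1)     # +-1 labels, shape (n, C)
W = np.ones((n, C)) / n                       # per-class example weights

def best_shared_stump(X, Y, W):
    """Pick the (feature, threshold) minimising error summed over classes."""
    best = (0, 0.0, np.inf)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            h = np.where(X[:, j] > t, 1.0, -1.0)
            err = (W * (h[:, None] != Y)).sum()   # summed over all classes
            if err < best[2]:
                best = (j, t, err)
    return best

for rnd in range(3):                          # a few boosting rounds
    j, t, err = best_shared_stump(X, Y, W)
    h = np.where(X[:, j] > t, 1.0, -1.0)
    alpha = 0.5 * np.log((1 - err / C) / max(err / C, 1e-9))
    W *= np.exp(-alpha * h[:, None] * Y)      # AdaBoost-style reweighting
    W /= W.sum(axis=0, keepdims=True)
    print(f"round {rnd}: shared feature {j}, threshold {t:.2f}")
```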
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scales, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
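As a toy illustration of the "Where" idea only (not the paper's filter), the sketch below builds a multiscale array of oriented Gabor detectors and reads off the winning (size, orientation) channel at each location; all filter parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor(size, sigma, theta, wavelength):
    """Even Gabor kernel with orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

thetas = np.linspace(0, np.pi, 4, endpoint=False)   # orientation channels
sigmas = [2.0, 4.0]                                 # size scales

rng = np.random.default_rng(0)
img = rng.random((64, 64))                          # stand-in figure image
responses = np.stack([np.abs(convolve(img, gabor(15, s, th, 4 * s)))
                      for s in sigmas for th in thetas])
# The winning channel per pixel jointly codes local orientation and size.
best = responses.argmax(axis=0)
print("winning (size, orientation) index at centre:", best[32, 32])
```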
Abstract:
In recent years there has been an apparent shift in research from content-based image retrieval (CBIR) to automatic image annotation in order to bridge the gap between low-level features and high-level semantics of images. Automatic Image Annotation (AIA) techniques facilitate the extraction of high-level semantic concepts from images by machine learning techniques. Many AIA techniques use feature analysis as the first step to identify the objects in the image. However, high-dimensional image features degrade the performance of the system. This paper describes and evaluates an automatic image annotation framework which uses SURF descriptors to select the right number of features, and the right features, for annotation. The proposed framework uses a hybrid approach in which k-means clustering is used in the training phase and fuzzy K-NN classification in the annotation phase. The performance of the system is evaluated using standard metrics.
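A hedged sketch of the framework's two phases: SURF bag-of-words features with k-means clustering for training (shown in comments, since SURF requires an opencv-contrib build with the non-free modules enabled) and a simple fuzzy K-NN vote for annotation. All names, labels and parameters are illustrative.

```python
import numpy as np

# Training phase (sketched in comments; SURF lives in opencv-contrib):
#   surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
#   _, desc = surf.detectAndCompute(train_img, None)        # SURF descriptors
#   kmeans = sklearn.cluster.KMeans(n_clusters=50).fit(np.vstack(all_descs))
#   hist = np.bincount(kmeans.predict(desc), minlength=50)  # image signature

def fuzzy_knn(x, X_train, labels, k=5, m=2.0):
    """Fuzzy K-NN: the k nearest neighbours vote with weight 1/d^(2/(m-1))."""
    d = np.linalg.norm(X_train - x, axis=1) + 1e-9
    nn = np.argsort(d)[:k]
    w = 1.0 / d[nn] ** (2.0 / (m - 1.0))
    votes = {}
    for i, wi in zip(nn, w):
        votes[labels[i]] = votes.get(labels[i], 0.0) + wi
    return max(votes, key=votes.get)

# Annotation phase, demonstrated on synthetic bag-of-words histograms that
# stand in for SURF signatures of training images.
rng = np.random.default_rng(0)
X = np.vstack([rng.poisson(5, (20, 50)),
               rng.poisson(9, (20, 50))]).astype(float)
labels = ["beach"] * 20 + ["city"] * 20
query = X[3] + rng.normal(0, 0.5, 50)
print(fuzzy_knn(query, X, labels))            # -> "beach"
```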
Abstract:
The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures, such as edges and contours, in images, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper a novel denoising scheme based on a multi-directional and anisotropic wavelet transform, called the directionlet transform, is presented. Image denoising in the wavelet domain is extended to the directionlet domain so that image features concentrate on fewer coefficients and more effective thresholding is possible. The image is first segmented and the dominant direction of each segment is identified to form a directional map. Then, according to the directional map, the directionlet transform is taken along the dominant direction of the selected segment. The decomposed images with directional energy are used to compute a scale-dependent, subband-adaptive optimal threshold based on the SURE risk. This threshold is then applied to all sub-bands except the LLL sub-band. The threshold-corrected sub-bands, together with the unprocessed first sub-band (LLL), are given as input to the inverse directionlet algorithm to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numeric and visual quality.
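The threshold selection can be sketched independently of the directionlet decomposition itself: for a subband with unit noise variance, Stein's unbiased risk estimate for soft thresholding is SURE(t) = N - 2#{i : |x_i| <= t} + sum_i min(|x_i|, t)^2, minimised over candidate thresholds. The coefficients below are a synthetic stand-in for a directional subband.

```python
import numpy as np

def sure_threshold(coeffs, sigma=1.0):
    """Pick t minimising Stein's unbiased risk estimate for soft thresholding."""
    x = np.abs(coeffs.ravel()) / sigma
    n = x.size
    candidates = np.sort(x)
    risks = [n - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
             for t in candidates]
    return sigma * candidates[int(np.argmin(risks))]

def soft(coeffs, t):
    """Soft thresholding: shrink magnitudes by t, zeroing small coefficients."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.zeros(1024)
clean[:50] = rng.normal(0, 5, 50)            # sparse "directional" subband
noisy = clean + rng.normal(0, 1, 1024)
t = sure_threshold(noisy, sigma=1.0)
den = soft(noisy, t)
print(f"threshold {t:.3f}, MSE before {np.mean((noisy - clean)**2):.3f}, "
      f"after {np.mean((den - clean)**2):.3f}")
```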
Abstract:
Bisphosphonate-related osteonecrosis of the jaws (BRONJ) is characterized as exposed bone in the jaws for more than 8 weeks in patients with a current or previous history of therapy with bisphosphonates (BPs) and no history of radiotherapy to the head and neck. We report a case series of 7 patients with BRONJ and analyze the variations in clinical and imaging signs, correlating them with the presence or absence of bone exposure. Among the patients, 6 were women and 1 was a man, aged 42–79 years. Five of the patients were using zoledronic acid and the other 2 alendronate. The use of BPs varied from 3 to 13 years. In 5 patients, tooth extraction was the triggering event of the injuries. Panoramic radiographs and computed tomography (CT) scans were evaluated by a radiologist blinded to the cases. Persistent unremodelled extraction sockets were present even several months after tooth extraction in 3 of the cases, consistent with CT findings that also showed areas of osteosclerosis and osteolysis. Patients were treated according to the recommendations of the AAOMS, with surgical debridement and antibiotic coverage with amoxicillin in the symptomatic patients. The follow-up of these patients ranged from 8 to 34 months, with a good response to treatment. The imaging findings in this case series were not specific and showed no difference between the stages of BRONJ (AAOMS, 2009). The image features were similar in the presence or absence of exposed bone.