395 results for Binary Image Representation
Abstract:
Efficient and effective feature detection and representation is an important consideration when processing videos, and a large number of applications such as motion analysis, 3D scene understanding, and tracking depend on it. Amongst the many feature description methods, local features are becoming increasingly popular for representing videos because of their simplicity and efficiency. While they achieve state-of-the-art performance with low computational complexity, their performance is still too limited for real-world applications. Furthermore, the rapid uptake of mobile devices has increased the demand for algorithms that can run with reduced memory and computational requirements. In this paper we propose a semi-binary feature detector-descriptor based on the BRISK detector, which can detect and represent videos with significantly reduced computational requirements while achieving performance comparable to state-of-the-art spatio-temporal feature descriptors. First, the BRISK feature detector is applied on a frame-by-frame basis to detect interest points; the detected keypoints are then compared against consecutive frames to identify significant motion. Keypoints with significant motion are encoded with the BRISK descriptor in the spatial domain and the Motion Boundary Histogram in the temporal domain. This descriptor is not only lightweight but also has lower memory requirements because of the binary nature of the BRISK descriptor, opening the possibility of applications on hand-held devices. We evaluate the combined detector-descriptor performance in the context of action classification with a standard, popular bag-of-features and SVM framework. Experiments are carried out on two popular datasets of varying complexity, and we demonstrate performance comparable to other descriptors with reduced computational complexity.
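The memory and speed advantage of a binary descriptor such as BRISK comes from matching by Hamming distance, which reduces to an XOR and a population count. A minimal NumPy sketch, with synthetic descriptor bytes standing in for real BRISK output:

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors packed as uint8 bytes."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

# Two hypothetical 512-bit BRISK-style descriptors (64 bytes each).
rng = np.random.default_rng(0)
a = rng.integers(0, 256, 64, dtype=np.uint8)
b = a.copy()
b[0] ^= 0b1111  # flip the four lowest bits of the first byte
print(hamming_distance(a, a))  # 0
print(hamming_distance(a, b))  # 4
```

Because the distance is computed on packed bytes, a 512-bit descriptor occupies only 64 bytes, which is what makes hand-held deployment plausible.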
Abstract:
Summary: Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was warped to the size and shape of a single 2D radiographic image of a subject. Mean absolute depth errors are comparable with previous approaches utilising multiple 2D input projections.
Introduction: Several approaches have been adopted to derive volumetric density (g cm⁻³) from a conventional 2D representation of areal bone mineral density (BMD, g cm⁻²). Such approaches have generally aimed at deriving an average depth across the areal projection rather than creating a formal 3D shape of the bone.
Methods: Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was subsequently warped to suit the size and shape of a single 2D radiographic image of a subject. CT scans of excised human femora, 18 and 24 scanned at pixel resolutions of 1.08 mm and 0.674 mm, respectively, were equally split into a training cohort (used to create the 3D shape template) and a test cohort.
Results: The mean absolute depth errors of 3.4 mm and 1.73 mm, respectively, for the two CT pixel sizes are comparable with previous approaches based upon multiple 2D input projections.
Conclusions: This technique has the potential to derive volumetric density from BMD and to facilitate 3D finite element analysis for prediction of the mechanical integrity of the proximal femur. It may further be applied to other anatomical bone sites such as the distal radius and lumbar spine.
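Generalized Procrustes analysis iteratively aligns every shape in a set to a running mean using ordinary Procrustes superimposition, which removes translation, scale, and rotation. The pairwise building block can be sketched with SciPy; the landmark coordinates below are hypothetical, not from the study:

```python
import numpy as np
from scipy.spatial import procrustes

# Two hypothetical 2-D landmark configurations of the same outline.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
# A rotated, scaled, translated copy of the reference configuration.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = 2.0 * ref @ R.T + np.array([3.0, -1.0])
# Ordinary Procrustes superimposition removes the similarity transform.
mtx1, mtx2, disparity = procrustes(ref, moved)
print(disparity)  # ~0: the shapes are identical after alignment
```

Generalized Procrustes analysis would repeat this alignment against an iteratively updated mean shape until convergence.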
Abstract:
To date, automatic recognition of semantic information such as salient objects and mid-level concepts from images remains a challenging task. Since real-world objects tend to exist in a context within their environment, computer vision researchers have increasingly incorporated contextual information to improve object recognition. In this paper, we present a method for building visual contextual ontologies from salient object descriptions for image annotation. The ontologies include not only partOf/kindOf relations but also spatial and co-occurrence relations. A two-step image annotation algorithm is also proposed, based on ontology relations and probabilistic inference. Unlike most existing work, we specifically explore how to combine ontology representation, contextual knowledge, and probabilistic inference. The experiments show that image annotation results are improved on the LabelMe dataset.
Abstract:
Robust texture recognition in underwater image sequences for marine pest population control, such as of the Crown-Of-Thorns Starfish (COTS), is a relatively unexplored area of research. Typically, humans count COTS by laboriously processing individual images taken during surveys. Being able to autonomously collect and process images of reef habitat and segment out the various marine biota holds the promise of allowing researchers to gain a greater understanding of the marine ecosystem and to evaluate the impact of different environmental variables. This research applies and extends the use of Local Binary Patterns (LBP) as a method for texture-based identification of COTS in survey images. The performance and accuracy of the algorithms are evaluated on an image data set taken on the Great Barrier Reef.
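The basic LBP operator thresholds each pixel's eight neighbours against the centre pixel and packs the comparison results into an 8-bit code; a histogram of these codes over a region is the texture descriptor. A minimal NumPy sketch of the plain (non-rotation-invariant) operator, not the paper's extended variant:

```python
import numpy as np

def lbp_3x3(image):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2-D grayscale image."""
    c = image[1:-1, 1:-1]
    # Neighbours in clockwise order starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = image[1 + dy:image.shape[0] - 1 + dy,
                   1 + dx:image.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is >= the centre pixel.
        code |= (nb >= c).astype(np.uint8) << bit
    return code

dark_centre = np.array([[5, 5, 5], [5, 1, 5], [5, 5, 5]], dtype=np.uint8)
print(lbp_3x3(dark_centre)[0, 0])  # 255: all eight neighbours exceed centre
```

In practice a normalised histogram of such codes, often restricted to "uniform" patterns, is what gets fed to the classifier.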
Abstract:
A good object representation, or object descriptor, is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and uniform local binary patterns are extracted from arbitrarily shaped image objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships between the extracted color and texture features. The maximum likelihood approach is used to estimate the intrinsic dimensionality, which is then used as a criterion for automatically selecting an optimal feature set from the fused features. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high-spatial-resolution aerial imagery. Experimental results demonstrate that substantial improvement can be achieved by using the proposed feature fusion method.
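The fusion step can be sketched with scikit-learn: concatenate the per-object colour and texture histograms, then let kernel PCA project the joint vector onto a non-linear subspace. The feature dimensions and component count below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(42)
# Hypothetical per-object features: a 16-bin colour histogram and a
# 10-bin uniform-LBP texture histogram for 100 image objects.
color_hist = rng.random((100, 16))
lbp_hist = rng.random((100, 10))
fused = np.hstack([color_hist, lbp_hist])  # simple concatenation first
# Kernel PCA captures non-linear relations between the colour and
# texture components; n_components stands in for the intrinsic
# dimensionality that the paper estimates by maximum likelihood.
kpca = KernelPCA(n_components=8, kernel="rbf")
reduced = kpca.fit_transform(fused)
print(reduced.shape)  # (100, 8)
```

The reduced vectors would then be passed to the SVM benchmark classifier.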
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amount of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, are still worth investigating. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used to improve object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graphical model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
Abstract:
In this paper, we present the application of a non-linear dimensionality reduction technique to the learning and probabilistic classification of hyperspectral images. Hyperspectral imaging spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrain for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime, and data classification can be performed.
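The pipeline of an Isomap embedding followed by an EM-fitted Gaussian Mixture Model can be sketched with scikit-learn. The band count, neighbourhood size, and mixture size below are illustrative assumptions, and the "spectra" are synthetic data generated from a low-dimensional latent manifold rather than real hyperspectral pixels:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical hyperspectral training pixels: 64 bands generated from a
# 2-D latent manifold, standing in for real spectra.
latent = rng.random((200, 2))
spectra = latent @ rng.random((2, 64))
# Isomap recovers the low-dimensional manifold underlying the spectra.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(spectra)
# A Gaussian Mixture Model (fitted with EM) is learnt on the embedding;
# its densities can be evaluated offline and applied at runtime.
gmm = GaussianMixture(n_components=4, random_state=0).fit(embedding)
labels = gmm.predict(embedding)
print(embedding.shape, labels.shape)
```

At runtime, new pixels would be mapped into the embedding (e.g. via `Isomap.transform`) and classified by the mixture's posterior probabilities.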
Abstract:
This paper intervenes in critical discussions about the representation of homosexuality. Rejecting the ‘manifest content’ of films, it turns to cultural history to map those public discourses which close down the ways in which films can be discussed. With relation to The Adventures of Priscilla, Queen of the Desert, it examines discussions of the film in Australian newspapers (both queer and mainstream) and finds that while there is disagreement about the interpretation to be made of the film, the terms within which those interpretations can be made are quite rigid. A matrix based on similarity, difference and value provides a series of positions and a vocabulary (transgression, assimilation, positive images and stereotypes) through which to make sense of this film. The article suggests that this matrix, and the idea that similarity and difference provide a suitable axis for making sense of homosexual identity, are problematic in discussing homosexual representation.
Abstract:
The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method that uses the social force model and Latent Dirichlet Allocation.
Abstract:
The ubiquity of multimodality in hypermedia environments is undeniable. Bezemer and Kress (2008) have argued that writing has been displaced by image as the central mode for representation. Given the current technical affordances of digital technology and user-friendly interfaces that enable the ease of multimodal design, the conspicuous absence of images in certain domains of cyberspace is deserving of critical analysis. In this presentation, I examine the politics of discourses implicit within hypertextual spaces, drawing textual examples from a higher education website. I critically examine the role of writing and other modes of production used in what Fairclough (1993) refers to as discourses of marketisation in higher education, tracing four pervasive discourses of teaching and learning in the current economy: i) materialization, ii) personalization, iii) technologisation, and iv) commodification (Fairclough, 1999). Each of these arguments is supported by the critical analysis of multimodal texts. The first is a podcast highlighting the new architectonic features of a university learning space. The second is a podcast and transcript of a university Open Day interview with prospective students. The third is a time-lapse video showing the construction of a new science and engineering precinct. These three multimodal texts contrast with a final web-based text that exhibits a predominance of writing and the powerful absence or silencing of the image. I connect the weightiness of words and the function of monomodality in the commodification of discourses, its resistance to the multimodal affordances of web-based technologies, and how this is used to establish particular sets of subject positions and ideologies that readers are constrained to occupy.
Applying principles of critical language study by theorists including Fairclough, Kress, Lemke, and others whose semiotic analysis of texts focuses on the connections between language, power, and ideology, I demonstrate how the denial of the image and the privileging of written words in the multimodality of cyberspace is an ideological effect that accentuates the dominance of the institution.
Abstract:
This paper presents a combined structure for using real, complex, and binary valued vectors for semantic representation. The theory, implementation, and application of this structure are all significant. For the theory underlying quantum interaction, it is important to develop a core set of mathematical operators that describe systems of information, just as core mathematical operators in quantum mechanics are used to describe the behavior of physical systems. The system described in this paper enables us to compare more traditional quantum mechanical models (which use complex state vectors) alongside more generalized quantum models that use real and binary vectors. The implementation of such a system presents fundamental computational challenges. For large and sometimes sparse datasets, the demands on time and space are different for real, complex, and binary vectors. To accommodate these demands, the Semantic Vectors package has been carefully adapted and can now switch between different number types comparatively seamlessly. This paper describes the key abstract operations in our semantic vector models and their implementations for real, complex, and binary vectors. We also discuss some of the key questions that arise in the field of quantum interaction and informatics, explaining how the wide availability of modelling options for different number fields will help to investigate some of these questions.
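As an illustration only (this is not the Semantic Vectors package API), the similarity operation typically differs by number type: cosine similarity for real vectors, the modulus of the Hermitian inner product for complex vectors, and a normalised Hamming measure for binary vectors. A NumPy sketch:

```python
import numpy as np

def similarity(a, b):
    """Similarity for real, complex, or binary semantic vectors."""
    if a.dtype == np.bool_:
        # Binary: 1 minus the normalised Hamming distance.
        return 1.0 - np.count_nonzero(a != b) / a.size
    if np.iscomplexobj(a):
        # Complex: modulus of the Hermitian inner product, normalised.
        return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Real: ordinary cosine similarity.
    return float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

r = np.array([1.0, 0.0])
c = np.array([1 + 1j, 0j])
b1 = np.array([True, False, True, True])
b2 = np.array([True, True, True, True])
print(similarity(r, r), similarity(b1, b2))  # 1.0 0.75
```

A single dispatching interface like this is one way the differing time and space demands of the three number types can be hidden behind a common set of operations.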
Abstract:
In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization makes SRC much more robust against various degradation factors for undersampled face recognition.
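The raw residual being normalized can be sketched as follows: the test sample is sparsely coded over a dictionary whose columns are gallery samples, and a per-subject residual is computed from each subject's portion of the code. This sketch uses scikit-learn's Lasso as the l1 sparse coder, which is an illustrative substitute for the authors' solver, and the cohort-normalization step itself is omitted:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(dictionary, labels, test, alpha=0.01):
    """Raw SRC residual of a test sample w.r.t. each gallery subject,
    using an l1 sparse code over the whole dictionary.
    dictionary: (d, n_atoms) matrix whose columns are gallery samples;
    labels: (n_atoms,) subject label per column; test: (d,) sample."""
    lasso = Lasso(alpha=alpha, max_iter=10000)
    lasso.fit(dictionary, test)
    x = lasso.coef_
    residuals = {}
    for subj in np.unique(labels):
        mask = labels == subj
        # Reconstruct the test sample from this subject's atoms only.
        recon = dictionary[:, mask] @ x[mask]
        residuals[subj] = float(np.linalg.norm(test - recon))
    return residuals

# Toy gallery: 4-D features, two subjects with two atoms each.
D = np.eye(4)
labels = np.array([0, 0, 1, 1])
test = np.array([1.0, 0.0, 0.0, 0.0])  # lies in subject 0's span
res = src_residuals(D, labels, test)
print(res[0] < res[1])  # True: subject 0 reconstructs the test better
```

SRC then assigns the test sample to the subject with the smallest residual; the paper's contribution is to normalize that residual against a user-specific cohort set before the decision.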