99 results for computer vision, facial expression recognition, swig, red5, actionscript, ruby on rails, html5
Abstract:
This paper presents an incremental learning solution for Linear Discriminant Analysis (LDA) and its applications to object recognition problems. We apply the sufficient spanning set approximation in three steps, i.e. updates of the total scatter matrix, the between-class scatter matrix, and the projected data matrix, which leads to an online solution that closely agrees with the batch solution in accuracy while significantly reducing the computational complexity. The algorithm yields an efficient solution to incremental LDA even when the number of classes as well as the set size is large. The incremental LDA method has also been shown useful for semi-supervised online learning; label propagation is done by integrating the incremental LDA into an EM framework. The method has been demonstrated on the task of merging large datasets collected during MPEG standardization for face image retrieval, on face authentication using the BANCA dataset, and on object categorisation using the Caltech101 dataset. © 2010 Springer Science+Business Media, LLC.
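As an illustrative sketch (not the authors' code), the two batch quantities that the incremental method updates via sufficient spanning sets — the total and between-class scatter matrices of LDA — can be computed as follows; the function name and toy data are ours:

```python
import numpy as np

def scatter_matrices(X, y):
    """Total scatter S_t and between-class scatter S_b, the batch LDA
    quantities that the incremental algorithm approximates and updates.
    X: (n_samples, n_features) data, y: (n_samples,) class labels."""
    mean = X.mean(axis=0)
    Xc = X - mean
    S_t = Xc.T @ Xc                          # total scatter about the global mean
    S_b = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xi = X[y == c]
        d = (Xi.mean(axis=0) - mean)[:, None]
        S_b += len(Xi) * (d @ d.T)           # weighted class-mean deviations
    return S_t, S_b
```

Batch LDA then finds projections maximizing between-class relative to total scatter; the paper's contribution is updating low-rank approximations of these matrices as new samples arrive instead of recomputing them.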
Abstract:
Among several others, the on-site inspection process is mainly concerned with finding the right design and specification information needed to inspect each newly constructed segment or element. While inspecting steel erection, for example, inspectors need to locate the right drawings for each member and the corresponding specification sections that describe, among other things, the allowable deviations in placement. These information-seeking tasks are highly monotonous, time-consuming, and often error-prone, because the high similarity of drawings and constructed elements and the abundance of information involved can confuse the inspector. To address this problem, this paper presents the first steps of research investigating the requirements of an automated computer vision-based approach to automatically identify “as-built” information and use it to retrieve “as-designed” project information for field construction, inspection, and maintenance tasks. Under this approach, a visual pattern recognition model was developed that aims to allow automatic identification of construction entities and materials visible in the camera’s field of view at a given time and location, and automatic retrieval of relevant design and specification information.
Abstract:
The US National Academy of Engineering recently identified restoring and improving urban infrastructure as one of the grand challenges of engineering. Part of this challenge stems from the lack of viable methods to map and label existing infrastructure. For computer vision, this challenge becomes: “How can we automate the process of extracting geometric, object-oriented models of infrastructure from visual data?” Object recognition and reconstruction methods have been successfully devised and/or adapted to answer this question for small or linear objects (e.g. columns). However, many infrastructure objects are large and/or planar, without significant and distinctive features, such as walls, floor slabs, and bridge decks. How can we recognize and reconstruct them in a 3D model? In this paper, strategies for infrastructure object recognition and reconstruction are presented to set the stage for posing the question above and for discussing future research in featureless, large or planar object recognition and modeling.
Abstract:
This paper presents the first performance evaluation of interest points on scalar volumetric data. Such data encodes 3D shape, a fundamental property of objects. The use of another such property, texture (i.e. 2D surface colouration, or appearance), for object detection, recognition, and registration has been well studied; 3D shape less so. However, the increasing prevalence of 3D shape acquisition techniques and the diminishing returns to be had from appearance alone have seen a surge in 3D shape-based methods. In this work, we investigate the performance of several state-of-the-art interest point detectors on volumetric data, in terms of the repeatability, number, and nature of interest points. Such methods form the first step in many shape-based applications. Our detailed comparison, with both quantitative and qualitative measures on synthetic and real 3D data, both point-based and volumetric, aids readers in selecting a method suitable for their application. © 2012 Springer Science+Business Media, LLC.
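For context, repeatability is commonly measured as the fraction of detections that survive a known transform between two acquisitions of the same data. A minimal sketch of that idea (our own illustration, not the paper's exact protocol; the function name and the Euclidean tolerance `eps` are assumptions):

```python
import numpy as np

def repeatability(pts_a, pts_b, transform, eps=2.0):
    """Fraction of interest points detected in volume A whose position,
    mapped into volume B by a known transform, lies within eps of some
    detection in B."""
    pts_b = np.asarray(pts_b, dtype=float)
    hits = 0
    for p in pts_a:
        mapped = transform(np.asarray(p, dtype=float))
        if np.linalg.norm(pts_b - mapped, axis=1).min() <= eps:
            hits += 1
    return hits / len(pts_a)
```

For two identical acquisitions, `transform` is the identity and repeatability approaches 1; a detector whose points drift or disappear under the transform scores lower.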
Abstract:
Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g. novel view synthesis and 3D reconstruction. Typically this information is implied through the time-stamp information embedded in the video streams. User-generated videos shot with consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. Our first contribution is a synchronization technique that establishes correspondences between feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. We evaluate performance using a number of real video recordings and show that our method is able to synchronize to within 1 second, which is significantly better than previous approaches. Our second contribution is a robust and unsupervised view-invariant activity recognition descriptor that exploits recurrence plot theory on spatial tiles. The descriptor on its own is shown to characterize activities from different views under occlusions better than state-of-the-art approaches. We combine this descriptor with our proposed synchronization method and show that it can further refine the synchronization index. © 2013 ACM.
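Recurrence plot theory, which the proposed descriptor builds on, starts from a binary recurrence matrix over a time series. A minimal sketch of that matrix (our own illustration; the paper applies recurrence analysis to features computed on spatial tiles, not raw samples):

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 when samples i and j of a
    (possibly multivariate) time series lie within eps of each other."""
    X = np.asarray(series, dtype=float).reshape(len(series), -1)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (dists <= eps).astype(int)
```

The diagonal is always 1; repeated dynamics show up as off-diagonal line structures, which is what makes such matrices useful for view-invariant activity characterization.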
Abstract:
Relative (comparative) attributes are promising for thematic ranking of visual entities, and also aid recognition tasks. However, attribute rank learning often requires a substantial amount of relational supervision, which is highly tedious and impractical for real-world applications. In this paper, we introduce the Semantic Transform, which, under minimal supervision, adaptively finds a semantic feature space along with a class ordering that are related in the best possible way. Such a semantic space is found for every attribute category. To relate the classes under weak supervision, the class ordering needs to be refined according to a cost function in an iterative procedure. This problem is NP-hard in general, and we thus propose a constrained search tree formulation for it. Driven by the adaptive semantic feature space representation, our model achieves the best results to date on the tasks of relative, absolute, and zero-shot classification on two popular datasets. © 2013 IEEE.