63 results for Visual Object Recognition


Relevance: 30.00%

Abstract:

We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance. © 2008 Springer Berlin Heidelberg.
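
The abstract does not spell out the classifier details; purely as a hypothetical sketch of the final stage, the snippet below trains a randomized decision forest on per-pixel feature vectors, assuming the motion-derived 3D cues have already been projected back onto the 2D image grid. The array shapes, cue count, and labels are made up for illustration and are not the paper's.

```python
# Minimal sketch: classify per-pixel feature vectors with a random forest.
# Assumes `features` holds motion-derived 3D cues already projected to the
# 2D grid (H x W x D) and `labels` holds per-pixel category ids (H x W).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

H, W, D = 120, 160, 5                          # toy image size, five cue channels
rng = np.random.default_rng(0)
features = rng.normal(size=(H, W, D))          # stand-in for projected 3D cues
labels = rng.integers(0, 3, size=(H, W))       # stand-in ground-truth classes

X = features.reshape(-1, D)                    # one row per pixel
y = labels.reshape(-1)

forest = RandomForestClassifier(n_estimators=50, max_depth=10, n_jobs=-1)
forest.fit(X, y)

# Predict a dense label map for a new frame with the same cue channels.
new_frame = rng.normal(size=(H, W, D))
pred = forest.predict(new_frame.reshape(-1, D)).reshape(H, W)
print(pred.shape)                              # (120, 160): one label per pixel
```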

Relevance: 30.00%

Abstract:

This paper presents a new online multi-classifier boosting algorithm for learning object appearance models. In many cases the appearance model is multi-modal, which we capture by training and updating multiple strong classifiers. The proposed algorithm jointly learns the classifiers and a soft partitioning of the input space, defining an area of expertise for each classifier. We show how this formulation improves the specificity of the strong classifiers, allowing simultaneous location and pose estimation in a tracking task. The proposed online scheme iteratively adapts the classifiers during tracking. Experiments show that the algorithm successfully learns multi-modal appearance models during a short initial training phase, subsequently updating them for tracking an object under rapid appearance changes. © 2010 IEEE.
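
As a loose illustration of the "multiple experts with areas of expertise" idea only, not the paper's algorithm, the sketch below keeps several online classifiers, routes each sample to the nearest of a set of prototypes standing in for the learned soft partition, and updates only that expert. scikit-learn's SGDClassifier is a stand-in for an online boosted classifier, and all data are synthetic.

```python
# Hypothetical sketch of several online "experts" sharing a partition of the
# input space. Samples are routed to the nearest prototype (a crude stand-in
# for the learned soft partition) and only that expert is updated online.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
n_experts, dim = 3, 16
prototypes = rng.normal(size=(n_experts, dim))
classes = np.array([0, 1])                          # background vs. object
experts = [SGDClassifier() for _ in range(n_experts)]
for k, clf in enumerate(experts):                   # warm start so predict() works
    clf.partial_fit(prototypes[k][None, :], [0], classes=classes)

def update(x, y):
    """Route a sample to its nearest expert and update that expert online."""
    k = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    experts[k].partial_fit(x[None, :], [y], classes=classes)
    prototypes[k] = 0.95 * prototypes[k] + 0.05 * x  # let the partition adapt
    return k

for _ in range(200):                                 # simulated tracking stream
    x = rng.normal(size=dim)
    update(x, int(x[0] > 0))                         # toy label rule

x_test = rng.normal(size=dim)
k = int(np.argmin(np.linalg.norm(prototypes - x_test, axis=1)))
print("expert", k, "prediction", experts[k].predict(x_test[None, :])[0])
```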

Relevance: 30.00%

Abstract:

Biological sensing is explored through novel stable colloidal dispersions of pyrrole-benzophenone and pyrrole copolymerized silica (PPy-SiO(2)-PPyBPh) nanocomposites, which allow covalent linking of biological molecules through light mediation. The mechanism of nanocomposite attachment to a model protein is studied using gold-labeled cholera toxin B (CTB) to enhance contrast in electron microscopy imaging. The biological test itself is carried out without gold labeling, i.e., using CTB only. The protein is shown to be covalently bound through the benzophenone groups. When the reactive PPy-SiO(2)-PPyBPh-CTB nanocomposite is exposed to specific anti-CTB recognition immunoglobulins, a qualitative visual agglutination assay occurs spontaneously, producing the positive test product, PPy-SiO(2)-PPyBPh-CTB-anti-CTB, in less than 1 h, while a control solution of PPy-SiO(2)-PPyBPh-CTB alone remains well dispersed over the same period. The dispersions were characterized by cryogenic transmission electron microscopy (cryo-TEM), scanning electron microscopy (SEM), FTIR, and X-ray photoelectron spectroscopy (XPS).

Relevance: 30.00%

Abstract:

This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach in which the object to be segmented is identified by the pose of the cameras rather than by user input such as 2D bounding rectangles or brush strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by graph cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where multi-view stereo (MVS) methods might fail. Furthermore, it improves performance on images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
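
The multi-view terms (epipolar constraints, weak stereo) are specific to the paper and are not reproduced here; as a single-image analogue of the "iterate graph cuts and re-estimate appearance models" loop, OpenCV's GrabCut does exactly that with colour GMMs. The file name and initial rectangle below are placeholders.

```python
# Single-view analogue of the iterate-and-refine loop: OpenCV's GrabCut
# alternates graph cuts with foreground/background colour-model updates.
# The paper's multi-view terms are not modelled; the rectangle plays the
# role of the camera-pose based initialisation.
import numpy as np
import cv2

img = cv2.imread("object_view.png")          # any calibrated view of the object
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)    # GMM parameters (background)
fgd_model = np.zeros((1, 65), np.float64)    # GMM parameters (foreground)
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)  # rough object region

# Five iterations of graph cut + appearance-model re-estimation.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probably) foreground form the silhouette for this view.
silhouette = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("silhouette.png", silhouette.astype(np.uint8))
```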

Relevance: 30.00%

Abstract:

We present a video-based system which interactively captures the geometry of a 3D object in the form of a point cloud, then recognizes and registers known objects in this point cloud in a matter of seconds. To achieve interactive speed, we exploit both efficient inference algorithms and parallel computation, often on a GPU. The system can be broken down into two distinct phases, geometry capture and object inference, which we discuss in further detail. © 2011 IEEE.
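
The abstract does not state which registration algorithm the system uses; purely as an illustration of the registration step, the sketch below aligns a stored object model to a captured point cloud with Open3D's ICP, given a rough initial pose. The file names and distance threshold are assumptions.

```python
# Hypothetical sketch of the registration step only: align a known object
# model to a captured point cloud with ICP (Open3D). The capture pipeline
# and the system's actual inference algorithm are not reproduced here.
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("captured_scene.ply")   # cloud from the capture phase
model = o3d.io.read_point_cloud("known_object.ply")     # stored object model

init = np.eye(4)            # rough initial pose (e.g. from a coarse detector)
threshold = 0.02            # max correspondence distance, in scene units

result = o3d.pipelines.registration.registration_icp(
    model, scene, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)            # fraction of model points matched
print("estimated pose:\n", result.transformation)
```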

Relevance: 30.00%

Abstract:

We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to the set of object images and features to be co-clustered. Co-clustering is performed in a way that maximizes discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each boosting classifier is an aggregation of weak learners, i.e. simple visual features. The obtained classifiers are useful for object detection tasks which exhibit multimodalities, e.g. multi-category and multi-view object detection. Experiments on a set of pedestrian images and a face data set demonstrate that the method yields intuitive image clusters with associated features and substantially outperforms conventional boosting classifiers in object detection tasks.
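
A loose, hypothetical sketch of the "classifiers compete for images" idea follows: it alternates between fitting one boosted classifier per cluster against the non-object set and reassigning each object image to the classifier that scores it highest. AdaBoost over decision stumps and the synthetic two-mode data are stand-ins, not the paper's setup.

```python
# Loose sketch of competing boosted classifiers: alternate between
# (1) fitting one classifier per cluster to separate its object images
# from the non-object set and (2) reassigning each object image to the
# classifier that scores it highest. Features are random placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
obj = np.vstack([rng.normal(+2.0, 1.0, (100, 20)),    # object images, mode A
                 rng.normal(-2.0, 1.0, (100, 20))])   # object images, mode B
neg = rng.normal(0.0, 1.0, (200, 20))                 # non-object images
n_clusters = 2
assign = rng.integers(0, n_clusters, size=len(obj))   # random initial clusters

for _ in range(5):                                     # a few alternations
    clfs = []
    for k in range(n_clusters):
        if not np.any(assign == k):                    # reseed an empty cluster
            assign[rng.integers(0, len(obj))] = k
        X = np.vstack([obj[assign == k], neg])
        y = np.r_[np.ones((assign == k).sum()), np.zeros(len(neg))]
        clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                 n_estimators=50)
        clfs.append(clf.fit(X, y))
    scores = np.column_stack([c.decision_function(obj) for c in clfs])
    assign = scores.argmax(axis=1)                     # images pick their expert

print("cluster sizes:", np.bincount(assign, minlength=n_clusters))
```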

Relevance: 30.00%

Abstract:

The automated detection of structural elements (e.g., columns and beams) from visual data can facilitate many construction and maintenance applications. Research in this area is still at an early stage. Existing methods rely solely on color and texture information, which leaves them unable to distinguish individual structural elements when the elements are connected to one another and made of the same material. This paper presents a novel method for automated concrete column detection from visual data. The method overcomes this limitation by combining the columns' boundary information with their color and texture cues. It starts by recognizing long vertical lines in an image or video frame through edge detection and the Hough transform. The bounding rectangle for each pair of lines is then constructed. When the rectangle resembles the shape of a column, and the color and texture contained between the pair of lines match one of the concrete samples in the knowledge base, a concrete column surface is assumed to be located; in this way, each concrete column in the images or videos is detected. The method was tested on real images and videos, and the results were compared with manual detection results to indicate the method's validity.
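
The first two steps, edge detection and Hough-based recognition of long vertical lines, lend themselves to a short sketch. The snippet below uses OpenCV's Canny detector and probabilistic Hough transform, with made-up thresholds and file names, and leaves the rectangle construction and the concrete colour/texture matching against the knowledge base unimplemented.

```python
# Sketch of the first steps described above: find long, near-vertical lines
# with Canny edge detection and the probabilistic Hough transform. All
# thresholds are illustrative only.
import numpy as np
import cv2

frame = cv2.imread("site_frame.jpg")                  # image or video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=frame.shape[0] // 3, maxLineGap=10)

vertical = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 80 <= angle <= 100:                        # keep near-vertical lines
            vertical.append((x1, y1, x2, y2))

# Each pair of neighbouring vertical lines would next define a bounding
# rectangle whose interior is matched against concrete colour/texture samples.
print(len(vertical), "candidate column edges")
```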

Relevance: 30.00%

Abstract:

The capability to automatically identify shapes, objects, and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction, and maintenance of construction projects. This capability is a product of technological breakthroughs in image processing that have allowed a large number of digital imaging applications to be developed across industries. In this paper, an automated, content-based construction site image retrieval method is presented. The method builds on image retrieval techniques, specifically those related to material and object identification, and matches known material samples with material clusters within the image content. The results demonstrate the suitability of this method for construction site image retrieval and reveal the capability of existing image processing technologies to accurately identify a wealth of materials in construction site images.
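
As a crude stand-in for the material matching idea, not the method's actual features or matching rules, the sketch below compares the colour histogram of a known material sample against histograms of image patches and keeps the patches that correlate strongly. The file names, patch size, and threshold are invented.

```python
# Crude stand-in for material matching: compare the colour histogram of a
# known material sample against histograms of image patches and keep the
# patches that match.
import cv2

def hsv_hist(bgr):
    """Normalised hue/saturation histogram of a BGR image or patch."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

sample = hsv_hist(cv2.imread("concrete_sample.jpg"))   # known material sample
image = cv2.imread("site_photo.jpg")

matches = []
step = 64                                              # patch size in pixels
for y in range(0, image.shape[0] - step, step):
    for x in range(0, image.shape[1] - step, step):
        patch = hsv_hist(image[y:y + step, x:x + step])
        score = cv2.compareHist(sample, patch, cv2.HISTCMP_CORREL)
        if score > 0.8:                                # illustrative threshold
            matches.append((x, y, score))

print(len(matches), "patches matched the material sample")
```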

Relevance: 30.00%

Abstract:

The capability to automatically identify shapes, objects, and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction, and maintenance of construction projects. This capability is a product of technological breakthroughs in image processing that have allowed a large number of digital imaging applications to be developed across industries. In this paper, an automated, content-based shape recognition model is presented. The model was devised to enhance the recognition capabilities of our existing material-based image retrieval model. The shape recognition model is based on clustering techniques, specifically those related to material and object segmentation. The model detects the borders of each previously detected material region depicted in the image, examines its linearity (length/width ratio), and determines its orientation (horizontal or vertical). The results demonstrate the suitability of this model for construction site image retrieval purposes and reveal the capability of existing clustering technologies to accurately identify the shape of a wealth of materials in construction site images.
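
The linearity and orientation checks can be illustrated with a bounding rectangle around a detected material region. In the sketch below the region mask is synthetic and the linearity threshold is arbitrary, so it only mirrors the idea rather than the model's actual rules.

```python
# Sketch of the linearity and orientation checks on a detected material
# region. `region_mask` would come from the material detection step; here
# it is a made-up binary mask containing one tall, thin region.
import numpy as np
import cv2

region_mask = np.zeros((400, 400), np.uint8)
cv2.rectangle(region_mask, (180, 40), (220, 360), 255, -1)

contours, _ = cv2.findContours(region_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    linearity = max(w, h) / max(min(w, h), 1)        # length/width ratio
    is_linear = linearity > 3.0                      # illustrative threshold
    orientation = "vertical" if h >= w else "horizontal"
    print(f"linearity={linearity:.1f} linear={is_linear} {orientation}")
```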

Relevance: 30.00%

Abstract:

The lack of viable methods to map and label existing infrastructure is one of the engineering grand challenges of the 21st century. For instance, over two thirds of the effort needed to geometrically model even simple infrastructure is spent on manually converting a cloud of points into a 3D model. The result is that few facilities today have a complete record of as-built information and that as-built models are not produced for the vast majority of new construction and retrofit projects. This leads to rework and design changes that can cost up to 10% of the installed costs. Automatically detecting building components could address this challenge. However, existing methods for detecting building components are not view- and scale-invariant, or have only been validated in restricted scenarios that require a priori knowledge and do not consider occlusions. This constrains their applicability in complex civil infrastructure scenes. In this paper, we test a pose-invariant method for labeling existing infrastructure. The method simultaneously detects objects and estimates their poses, taking advantage of a recent formulation for object detection and customizing it to generic civil infrastructure scenes. Our preliminary experiments demonstrate that this method achieves convincing recognition results.

Relevance: 30.00%

Abstract:

Visual recognition problems often involve classifying vast numbers of pixels, across scales, to locate objects of interest in an image or to segment images according to object classes. The requirement for high speed and accuracy makes these problems very challenging and has motivated studies of efficient classification algorithms. A novel multi-classifier boosting algorithm is proposed to tackle multimodal problems by simultaneously clustering samples and boosting classifiers in Section 2. The method is extended into an online version for object tracking in Section 3. Section 4 presents a tree-structured classifier, called Super tree, that further reduces the classification time of a standard boosting classifier. The proposed methods are demonstrated on object detection, tracking and segmentation tasks. © 2013 Springer-Verlag Berlin Heidelberg.