97 results for Object-teaching.
Abstract:
The lack of viable methods to map and label existing infrastructure is one of the engineering grand challenges of the 21st century. For instance, over two thirds of the effort needed to geometrically model even simple infrastructure is spent on manually converting a cloud of points into a 3D model. As a result, few facilities today have a complete record of as-built information, and as-built models are not produced for the vast majority of new construction and retrofit projects. This leads to rework and design changes that can cost up to 10% of the installed costs. Automatically detecting building components could address this challenge. However, existing methods for detecting building components are not view- and scale-invariant, or have only been validated in restricted scenarios that require a priori knowledge and do not consider occlusions. This constrains their applicability in complex civil infrastructure scenes. In this paper, we test a pose-invariant method of labeling existing infrastructure. The method simultaneously detects objects and estimates their poses. It takes advantage of a recent novel formulation for object detection and customizes it to generic civil infrastructure scenes. Our preliminary experiments demonstrate that this method achieves convincing recognition results.
Abstract:
We present algorithms for tracking and reasoning about local traits at the subsystem level based on the observed emergent behavior of multiple coordinated groups in potentially cluttered environments. Our proposed Bayesian inference schemes, which are primarily based on (Markov chain) Monte Carlo sequential methods, include: 1) an evolving network-based multiple object tracking algorithm that is capable of categorizing objects into groups; 2) a multiple cluster tracking algorithm for dealing with a prohibitively large number of objects; and 3) a causality inference framework for identifying dominant agents based exclusively on their observed trajectories. We use these as building blocks for developing a unified tracking and behavioral reasoning paradigm. Both synthetic and realistic examples are provided to demonstrate the derived concepts. © 2013 Springer-Verlag Berlin Heidelberg.
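The sequential Monte Carlo machinery underlying such schemes can be illustrated with a minimal bootstrap particle filter for a single 1-D object. This is only a hedged sketch assuming a Gaussian random-walk motion model and Gaussian observation noise; the function name and parameter values are illustrative, not the authors' network-based multi-group formulation.

```python
import math
import random

def particle_filter(observations, n_particles=500, motion_std=1.0, obs_std=2.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in noise.

    Propagate particles, weight them by the Gaussian observation
    likelihood, then resample. Returns the posterior-mean estimate
    at each time step.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 5.0) for _ in range(n_particles)]  # diffuse prior
    estimates = []
    for z in observations:
        # Propagate each particle through the motion model (random walk).
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # Weight particles by the observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1e-300
        weights = [w / total for w in weights]
        # Posterior-mean estimate at this step.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to refocus particles on likely states.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Example: noisy observations of a target drifting from 0 toward 10.
obs = [1.0, 2.5, 4.0, 5.5, 7.0, 8.5, 10.0]
est = particle_filter(obs)
```

Resampling after every update is the simplest (bootstrap) variant; practical trackers often resample only when the effective sample size drops, but the structure above is the common core of sequential Monte Carlo methods.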
Abstract:
A common approach to visualising multidimensional data sets is to map every data dimension to a separate visual feature. It is generally assumed that such visual features can be judged independently of each other. However, we have recently shown that interactions between features do exist [Hannus et al. 2004; van den Berg et al. 2005]. In those studies, we first determined the individual colour and size contrast, or colour and orientation contrast, necessary to achieve a fixed level of discrimination performance in single-feature search tasks. These contrasts were then used in a conjunction search task in which the target was defined by a combination of a colour and a size, or a colour and an orientation. We found that in conjunction search, despite the matched feature discriminability, subjects significantly more often chose an item with the correct colour than one with the correct size or orientation. This finding may have consequences for visualisation: the saliency of information coded by objects' size or orientation may change when there is a need to simultaneously search for a colour that codes another aspect of the information. In the present experiment, we studied whether a colour bias can also be found in a more complex and continuous task. Subjects had to search for a target in a node-link diagram consisting of 50 nodes, while their eye movements were being tracked. Each node was assigned a random colour and size (from a range of 10 possible values with fixed perceptual distances). We found that when we base the distances on the mean threshold contrasts determined in our previous experiments, the fixated nodes tend to resemble the target colour more than the target size (Figure 1a). This indicates that despite the perceptual matching, colour is judged with greater precision than size during conjunction search. We also found that when we double the size contrast (i.e. the distances between the 10 possible node sizes), this effect disappears (Figure 1b).
Our findings confirm that the previously found decrease in salience of other features during colour conjunction search is also present in more complex (more 'visualisation-realistic') visual search tasks. The asymmetry in visual search behaviour can be compensated for by manipulating step sizes (perceptual distances) within feature dimensions. Our results therefore also imply that feature hierarchies are not completely fixed and may be adapted to the requirements of a particular visualisation. Copyright © 2005 by the Association for Computing Machinery, Inc.
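The stimulus manipulation described in this abstract, 10 values per feature dimension separated by a fixed perceptual step, with the size step then doubled, can be sketched as follows. The function name and numeric values are illustrative assumptions, not the calibrated threshold contrasts from the experiments.

```python
def feature_levels(base, step, n=10):
    """Generate n feature values separated by a fixed perceptual step.

    Sketch of the stimulus construction: node colours and sizes each
    take one of 10 values with fixed perceptual distances. `base` and
    `step` here are placeholder numbers, not measured thresholds.
    """
    return [base + i * step for i in range(n)]

sizes = feature_levels(10.0, 1.5)          # step from threshold contrast
sizes_doubled = feature_levels(10.0, 3.0)  # doubled size contrast (Figure 1b condition)
```

Doubling `step` for the size dimension while leaving the colour steps unchanged is the compensation the authors report: it equalises search behaviour across the two feature dimensions.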
Abstract:
We present the Multi Scale Shape Index (MSSI), a novel feature for 3D object recognition. Inspired by scale-space filtering theory and the Shape Index measure proposed by Koenderink & van Doorn [6], this feature associates different forms of shape, such as umbilics, saddle regions, and parabolic regions, with a real-valued index. This association is useful for representing an object based on its constituent shape forms. We derive closed-form scale-space equations which compute a characteristic scale at each 3D point in a point cloud without an explicit mesh structure. This characteristic scale is then used to estimate the Shape Index. We quantitatively evaluate the robustness and repeatability of the MSSI feature under varying object scales and changing point cloud density. We also quantify the performance of MSSI for object category recognition on a publicly available dataset. © 2013 Springer-Verlag.
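For reference, the underlying Shape Index of Koenderink & van Doorn maps the principal curvatures k1 >= k2 at a surface point to S = (2/pi) * arctan((k1 + k2) / (k1 - k2)), a value in [-1, 1] where +1 is a spherical cap, 0 a symmetric saddle, and -1 a spherical cup. A minimal sketch of that mapping is below; the scale-space estimation of curvatures from a raw point cloud, which is MSSI's actual contribution, is not shown.

```python
import math

def shape_index(k1, k2):
    """Koenderink & van Doorn shape index from principal curvatures.

    Maps the local surface type to [-1, 1]: +1 spherical cap,
    0 symmetric saddle, -1 spherical cup. Undefined at planar
    points (k1 == k2 == 0); we return None there.
    """
    if k1 < k2:
        k1, k2 = k2, k1  # enforce the k1 >= k2 convention
    if k1 == 0.0 and k2 == 0.0:
        return None  # planar point: shape index undefined
    # atan2 handles the umbilic case k1 == k2 (division by zero) cleanly.
    return (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)

# A convex umbilic (k1 == k2 > 0) gives +1; a symmetric saddle
# (k1 == -k2) gives 0; a concave umbilic gives -1.
```

Using `atan2` rather than `atan` of the quotient avoids a special case at umbilic points, where the denominator k1 - k2 vanishes.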
Abstract:
This paper addresses the basic problem of recovering the 3D surface of an object that is observed in motion by a single camera under a static but unknown lighting condition. We propose a method to establish pixelwise correspondence between input images by way of depth search, investigating optimal subsets of intensities rather than employing all the relevant pixel values. The thrust of our algorithm is that it is capable of dealing with specularities which appear on top of the shading variation caused by object motion, at both stages: finding sparse point correspondences and dense depth search. We also propose that a linearised image basis can be directly computed by the procedure of finding the correspondence. We illustrate the performance of the theoretical propositions using images of real objects. © 2009. The copyright of this document resides with its authors.
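The idea of matching on an optimal subset of intensities rather than all pixel values can be illustrated with a simple trimmed matching cost that discards the largest residuals, treating them as specular outliers. This is a hedged sketch only: `robust_cost` and the `drop_frac` parameter are illustrative assumptions, not the paper's actual subset-selection criterion.

```python
def robust_cost(profile_a, profile_b, drop_frac=0.25):
    """Trimmed matching cost between two per-frame intensity profiles.

    Sort the absolute residuals, drop the largest `drop_frac` fraction
    (assumed to come from specular highlights), and sum the rest,
    instead of using all pixel values in the comparison.
    """
    residuals = sorted(abs(a - b) for a, b in zip(profile_a, profile_b))
    keep = max(1, int(len(residuals) * (1.0 - drop_frac)))
    return sum(residuals[:keep])

# A specular spike in one frame barely affects the trimmed cost,
# while the plain sum-of-absolute-differences is dominated by it.
clean = [0.2, 0.3, 0.4, 0.5]
spiky = [0.2, 0.3, 0.9, 0.5]  # third frame hit by a highlight
```

A depth search would evaluate such a cost for each candidate depth and keep the minimizer; trimming makes a Lambertian-plus-outlier model usable where a pure least-squares cost would be pulled off by the specular frames.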