11 results for Visual identification tasks
in Cambridge University Engineering Department Publications Database
Abstract:
A common approach to visualise multidimensional data sets is to map every data dimension to a separate visual feature. It is generally assumed that such visual features can be judged independently from each other. However, we have recently shown that interactions between features do exist [Hannus et al. 2004; van den Berg et al. 2005]. In those studies, we first determined the individual colour and size contrast or colour and orientation contrast necessary to achieve a fixed level of discrimination performance in single-feature search tasks. These contrasts were then used in a conjunction search task in which the target was defined by a combination of a colour and a size or a colour and an orientation. We found that in conjunction search, despite the matched feature discriminability, subjects significantly more often chose an item with the correct colour than one with the correct size or orientation. This finding may have consequences for visualisation: the saliency of information coded by objects' size or orientation may change when there is a need to simultaneously search for a colour that codes another aspect of the information. In the present experiment, we studied whether a colour bias can also be found in a more complex and continuous task. Subjects had to search for a target in a node-link diagram consisting of 50 nodes, while their eye movements were being tracked. Each node was assigned a random colour and size (from a range of 10 possible values with fixed perceptual distances). We found that when we base the distances on the mean threshold contrasts that were determined in our previous experiments, the fixated nodes tend to resemble the target colour more than the target size (Figure 1a). This indicates that despite the perceptual matching, colour is judged with greater precision than size during conjunction search. We also found that when we double the size contrast (i.e. the distances between the 10 possible node sizes), this effect disappears (Figure 1b).
Our findings confirm that the previously found decrease in salience of other features during colour conjunction search is also present in more complex (more 'visualisation-realistic') visual search tasks. The asymmetry in visual search behaviour can be compensated for by manipulating step sizes (perceptual distances) within feature dimensions. Our results therefore also imply that feature hierarchies are not completely fixed and may be adapted to the requirements of a particular visualisation. Copyright © 2005 by the Association for Computing Machinery, Inc.
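The step-size manipulation described above is easy to make concrete. Below is a minimal sketch, assuming a hypothetical threshold contrast value; the actual contrasts come from the authors' earlier single-feature experiments and are not reproduced here.

```python
# Sketch: a 10-value feature scale whose neighbouring values are separated by
# a fixed perceptual distance (one threshold contrast), plus the doubled-step
# variant used in the follow-up condition. SIZE_THRESHOLD is an invented
# placeholder, not a contrast measured in the original experiments.

def feature_scale(base, step, n=10):
    """n feature values separated by a fixed perceptual step."""
    return [base + i * step for i in range(n)]

SIZE_THRESHOLD = 0.08  # hypothetical size threshold contrast

sizes_matched = feature_scale(1.0, SIZE_THRESHOLD)      # condition of Figure 1a
sizes_doubled = feature_scale(1.0, 2 * SIZE_THRESHOLD)  # condition of Figure 1b
```

Doubling the step leaves the number of feature values unchanged but spreads them over twice the range, which is the manipulation that removed the colour bias.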
Abstract:
We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and features to be co-clustered. Co-clustering is performed in a way that maximises discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint-clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each boosting classifier is an aggregation of weak-learners, i.e. simple visual features. The obtained classifiers are useful for object detection tasks which exhibit multimodalities, e.g. multi-category and multi-view object detection tasks. Experiments on a set of pedestrian images and a face data set demonstrate that the method yields intuitive image clusters with associated features and is much superior to conventional boosting classifiers in object detection tasks.
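The competition scheme can be caricatured in a few lines. The sketch below replaces the paper's boosted aggregations of weak-learners with a single decision stump per cluster and uses a deterministic seeding, so it only illustrates the assign-and-refit competition loop, not the actual algorithm.

```python
# Toy analogue of simultaneous classifier competition: object images are
# assigned to whichever "strong classifier" scores them highest, and each
# classifier is then refit on its own images against the shared non-object
# set. Real boosted classifiers are replaced by one decision stump each,
# purely to keep the sketch short.

def train_stump(pos, neg):
    """Pick the (feature, threshold) stump best separating pos from neg."""
    best = None
    for f in range(len(pos[0])):
        for t in sorted({x[f] for x in pos + neg}):
            score = sum(x[f] > t for x in pos) + sum(x[f] <= t for x in neg)
            if best is None or score > best[0]:
                best = (score, f, t)
    return best[1], best[2]

def margin(x, stump):
    f, t = stump
    return x[f] - t  # positive = looks like an object to this classifier

def co_cluster(objects, non_objects, k=2, iters=5):
    # deterministic seeding: one object image per classifier
    step = len(objects) // k
    stumps = [train_stump([objects[c * step]], non_objects) for c in range(k)]
    assign = [0] * len(objects)
    for _ in range(iters):
        # competition: each image goes to the most confident classifier
        assign = [max(range(k), key=lambda c: margin(x, stumps[c]))
                  for x in objects]
        # update: refit each classifier on its images vs shared non-objects
        for c in range(k):
            pos = [x for x, a in zip(objects, assign) if a == c] or objects
            stumps[c] = train_stump(pos, non_objects)
    return assign, stumps

# two object "modes" (e.g. two views), 2-D features, plus non-object images
objects = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
non_objects = [(0.45, 0.5), (0.5, 0.45)]
assign, stumps = co_cluster(objects, non_objects)
assert assign == [0, 0, 1, 1]  # the two modes end up in separate clusters
```

Each cluster ends up with the feature that discriminates its own mode from the non-objects, which is the "discriminative co-cluster" intuition in miniature.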
Abstract:
The capability to automatically identify shapes, objects and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction and maintenance of construction projects. Examples include surface crack detection, assessment of fire-damaged mortar, fatigue evaluation of asphalt mixes, aggregate shape measurements, velocimetry, vehicle detection, pore size distribution in geotextiles, damage detection and others. This capability is a product of the technological breakthroughs in the area of Image and Video Processing that have allowed for the development of a large number of digital imaging applications in all industries, ranging from well-established medical diagnostic tools (magnetic resonance imaging, spectroscopy and nuclear medical imaging) to image searching mechanisms (image matching, content-based image retrieval). Content-based image retrieval techniques can also assist in the automated recognition of materials in construction site images and thus enable the development of reliable methods for image classification and retrieval. The amount of original imaging information produced yearly in the construction industry during the last decade has experienced tremendous growth. Digital cameras and image databases are gradually replacing traditional photography, while owners demand complete site photograph logs and engineers store thousands of images for each project to use in a number of construction management tasks. However, construction companies tend to store images without following any standardized indexing protocols, making manual searching and retrieval a tedious and time-consuming effort. Alternatively, material and object identification techniques can be used for the development of an automated, content-based construction site image retrieval methodology.
These methods can utilize automatic material- or object-based indexing to remove the user from the time-consuming and tedious manual classification process. In this paper, a novel material identification methodology is presented. This method utilizes content-based image retrieval concepts to match known material samples with material clusters within the image content. The results demonstrate the suitability of this methodology for construction site image retrieval purposes and reveal the capability of existing image processing technologies to accurately identify a wealth of materials from construction site images.
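The core matching step, comparing a known material sample against a cluster of image content, can be sketched with a generic content-based retrieval similarity. The grey-level data and material names below are invented for illustration; the paper's actual feature set is not specified here.

```python
# Illustrative sketch: match an image region against known material samples
# using histogram intersection, a standard content-based retrieval measure.
# Real systems would use colour and texture features; this toy uses
# grey-level histograms only, on made-up pixel values.

def histogram(pixels, bins=8, max_val=256):
    """Normalised grey-level histogram of a pixel list."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    return [c / len(pixels) for c in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_material(region_pixels, samples):
    """Return the material whose sample histogram best matches the region."""
    hr = histogram(region_pixels)
    return max(samples, key=lambda m: intersection(hr, histogram(samples[m])))

# hypothetical samples: bright concrete vs dark asphalt grey levels
samples = {
    "concrete": [200, 210, 190, 205, 195, 200],
    "asphalt": [40, 50, 45, 55, 35, 60],
}
region = [198, 202, 207, 193, 201, 196]  # a material cluster from a site image
print(best_material(region, samples))  # concrete
```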
Abstract:
Among several others, the on-site inspection process is mainly concerned with finding the right design and specifications information needed to inspect each newly constructed segment or element. While inspecting steel erection, for example, inspectors need to locate the right drawings for each member and the corresponding specifications sections that describe the allowable deviations in placement among others. These information seeking tasks are highly monotonous, time consuming and often erroneous, due to the high similarity of drawings and constructed elements and the abundance of information involved which can confuse the inspector. To address this problem, this paper presents the first steps of research that is investigating the requirements of an automated computer vision-based approach to automatically identify “as-built” information and use it to retrieve “as-designed” project information for field construction, inspection, and maintenance tasks. Under this approach, a visual pattern recognition model was developed that aims to allow automatic identification of construction entities and materials visible in the camera’s field of view at a given time and location, and automatic retrieval of relevant design and specifications information.
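Once entities are recognised, the retrieval half of the approach reduces to mapping an as-built observation to its as-designed documents. The sketch below invents a key schema and document names purely to make that step concrete; the paper does not specify this data model.

```python
# Minimal sketch of as-built -> as-designed retrieval: once the vision model
# has identified a constructed entity at a known location, the matching
# drawings and specification sections are fetched by key lookup. The schema,
# grid labels and document identifiers are hypothetical.

as_designed = {
    # (entity type, grid location) -> relevant drawing and spec section
    ("steel_column", "B-3"): {"drawing": "S-201", "spec": "05120 par. 3.7"},
    ("steel_beam", "B-3"): {"drawing": "S-305", "spec": "05120 par. 3.9"},
}

def retrieve(entity, location):
    """Map a recognised as-built entity to its as-designed information."""
    return as_designed.get((entity, location))

info = retrieve("steel_column", "B-3")
```

The point of automating this lookup is exactly the monotony the abstract describes: the inspector no longer searches near-identical drawings by hand.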
Abstract:
First responders are in danger when they perform tasks in damaged buildings after earthquakes. Structural collapse due to the failure of critical load-bearing structural members (e.g. columns) during a post-earthquake event such as an aftershock can make victims of first responders, who are unable to assess the impact of the damage inflicted on load-bearing members. The writers here propose a method that can provide first responders with a crude but quick estimate of the damage inflicted on load-bearing members. Under the proposed method, critical structural members (reinforced concrete columns in this study) are identified from digital visual data and the damage superimposed on these structural members is detected with the help of visual pattern recognition techniques. The correlation of the two (e.g. the position, orientation and size of a crack on the surface of a column) is used to query a case-based reasoning knowledge base, which contains a priori classified states of columns according to the damage inflicted on them. When query results indicate the column's damage state is severe, the method assumes that a structural collapse is likely and first responders are warned to evacuate.
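The case-based query amounts to nearest-case retrieval over a crack descriptor. The sketch below assumes a three-feature descriptor (position, orientation, size) and a tiny hand-made case base; the stored cases, feature weights and thresholds are all invented for illustration.

```python
# Sketch of the case-based reasoning query: a detected crack is described by
# its position (relative height on the column), orientation and relative
# size, and matched to the nearest a priori classified case (1-NN retrieval).
# The cases and weights below are illustrative, not from the paper.
import math

# (position 0..1 up the column, orientation deg, relative size 0..1) -> state
CASES = [
    ((0.1, 45.0, 0.6), "severe"),    # large diagonal shear crack near the base
    ((0.5, 0.0, 0.2), "moderate"),   # small horizontal crack at mid-height
    ((0.8, 90.0, 0.1), "light"),     # hairline vertical crack near the top
]
WEIGHTS = (1.0, 1.0 / 90.0, 1.0)     # put the three features on similar scales

def damage_state(crack):
    """Return the damage state of the nearest stored case."""
    def dist(case):
        return math.sqrt(sum((w * (a - b)) ** 2
                             for w, a, b in zip(WEIGHTS, crack, case[0])))
    return min(CASES, key=dist)[1]

state = damage_state((0.15, 40.0, 0.55))
if state == "severe":
    print("warn first responders: possible structural collapse")
```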
Abstract:
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
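The compulsory-averaging property falls out of a very small population-coding simulation. The sketch below is in the spirit of the model (Gaussian orientation tuning, pooled responses, population-vector readout) but every parameter is an arbitrary illustrative choice, not the paper's fitted model.

```python
# Minimal population-coding sketch: orientation is encoded by units with
# Gaussian tuning curves; pooling the responses to a target and a flanker
# and decoding the result yields an intermediate orientation, i.e. the
# "compulsory averaging" signature of crowding. Parameters are arbitrary.
import math

PREFS = [i * 180.0 / 36 for i in range(36)]   # preferred orientations, deg
TUNING_WIDTH = 20.0                           # Gaussian tuning s.d., deg

def circ_diff(a, b):
    """Smallest difference between two orientations (180 deg periodic)."""
    d = (a - b) % 180.0
    return min(d, 180.0 - d)

def responses(theta):
    return [math.exp(-circ_diff(theta, p) ** 2 / (2 * TUNING_WIDTH ** 2))
            for p in PREFS]

def decode(resp):
    """Population vector on the doubled angle (orientation is 180-periodic)."""
    x = sum(r * math.cos(2 * math.radians(p)) for r, p in zip(resp, PREFS))
    y = sum(r * math.sin(2 * math.radians(p)) for r, p in zip(resp, PREFS))
    return (math.degrees(math.atan2(y, x)) / 2) % 180.0

target, flanker = 80.0, 100.0
pooled = [a + b for a, b in zip(responses(target), responses(flanker))]
print(round(decode(pooled)))  # 90, midway between the two orientations
```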
On the generality of crowding: visual crowding in size, saturation, and hue compared to orientation.
Abstract:
Perception of peripherally viewed shapes is impaired when surrounded by similar shapes. This phenomenon is commonly referred to as "crowding". Although studied extensively for perception of characters (mainly letters) and, to a lesser extent, for orientation, little is known about whether and how crowding affects perception of other features. Nevertheless, current crowding models suggest that the effect should be rather general and thus not restricted to letters and orientation. Here, we report on a series of experiments investigating crowding in the following elementary feature dimensions: size, hue, and saturation. Crowding effects in these dimensions were benchmarked against those in the orientation domain. Our primary finding is that all features studied show clear signs of crowding. First, identification thresholds increase with decreasing mask spacing. Second, for all tested features, critical spacing appears to be roughly half the viewing eccentricity and independent of stimulus size, a property previously proposed as the hallmark of crowding. Interestingly, although critical spacings are highly comparable, crowding magnitude differs across features: Size crowding is almost as strong as orientation crowding, whereas the effect is much weaker for saturation and hue. We suggest that future theories and models of crowding should be able to accommodate these differences in crowding effects.
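The reported hallmark, a critical spacing of roughly half the viewing eccentricity regardless of stimulus size, is essentially Bouma's rule of thumb. A trivial helper makes the relation concrete; the 0.5 constant is the approximate proportionality the abstract reports, not an exact law.

```python
# Critical spacing is roughly proportional to eccentricity with a constant
# of about 0.5, independent of stimulus size (Bouma's rule of thumb).

def critical_spacing(eccentricity_deg, b=0.5):
    """Approximate centre-to-centre spacing (deg) below which crowding occurs."""
    return b * eccentricity_deg

# A flanker 3 deg from a target viewed at 10 deg eccentricity falls well
# inside the ~5 deg critical spacing, so crowding is expected:
print(critical_spacing(10.0))  # 5.0
```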
Abstract:
Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors, and potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems to be adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another we dissociate distractor homogeneity from predictability. In all conditions in both experiments, we found a strong decrease of precision with increasing set size, suggesting that precision being independent of set size is the exception rather than the rule.
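The first factor, cumulative distractor noise, can be illustrated with a standard signal-detection max rule: even with fixed precision per item, each extra distractor is another chance for noise to exceed the decision criterion. This is a generic textbook illustration, not the authors' model; any precision loss with set size (the paper's finding) would compound the effect shown here.

```python
# Signal-detection sketch: under a max rule with unit-variance noise, the
# probability that at least one of n noise-only items exceeds the criterion
# grows with set size, making search harder even at fixed per-item precision.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def false_alarm_rate(n_items, criterion=2.0):
    """P(any of n independent noise responses exceeds the criterion)."""
    return 1.0 - norm_cdf(criterion) ** n_items

rates = [false_alarm_rate(n) for n in (1, 2, 4, 8)]
# false alarms increase monotonically with set size
assert all(a < b for a, b in zip(rates, rates[1:]))
```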
Abstract:
Relative (comparative) attributes are promising for thematic ranking of visual entities, which also aids in recognition tasks. However, attribute rank learning often requires a substantial amount of relational supervision, which is highly tedious and apparently impractical for real-world applications. In this paper, we introduce the Semantic Transform, which under minimal supervision, adaptively finds a semantic feature space along with a class ordering that is related in the best possible way. Such a semantic space is found for every attribute category. To relate the classes under weak supervision, the class ordering needs to be refined according to a cost function in an iterative procedure. This problem is NP-hard in general, and we thus propose a constrained search tree formulation for it. Driven by the adaptive semantic feature space representation, our model achieves the best results to date for all of the tasks of relative, absolute and zero-shot classification on two popular datasets. © 2013 IEEE.