Abstract:
After earthquakes, licensed inspectors use established codes to assess the impact of damage on structural elements, a process that typically takes days to weeks. However, emergency responders (e.g., firefighters) must enter damaged structures within hours of a disaster to save lives, and therefore cannot wait until an official assessment is complete. This is a risk firefighters have to take. Although search and rescue organizations offer training seminars to familiarize firefighters with structural damage assessment, the effectiveness of this training is hard to guarantee when firefighters must perform rescue and damage assessment operations simultaneously; moreover, the training is not available to every firefighter. The authors therefore propose a novel framework that provides firefighters with a quick but crude assessment of damaged buildings by evaluating the visible damage on critical structural elements (concrete columns in this study). This paper presents the first step of the framework: automating the detection of concrete columns from visual data. To achieve this, the typical shape of a column (a pair of long vertical lines) is recognized using edge detection and the Hough transform. The bounding rectangle for each pair of long vertical lines is then formed. When the resulting rectangle resembles a column and the material contained in the region between the two lines is recognized as concrete, the region is marked as a concrete column surface. Real video and image data are used to test the method. The preliminary results indicate that concrete columns can be detected when they are not distant and have at least one surface visible.
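A minimal sketch of the detection pipeline described above, in Python with OpenCV (a tool choice the abstract does not specify); the Canny/Hough thresholds, the vertical-angle tolerance, the aspect-ratio test, and the is_concrete saturation check are illustrative placeholders rather than the authors' actual parameters or material classifier.

    import cv2
    import numpy as np

    def detect_column_candidates(image_bgr, min_line_frac=0.4, angle_tol_deg=10):
        """Find pairs of long near-vertical lines and return bounding rectangles
        that plausibly frame a concrete column surface."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        h, w = gray.shape
        # Probabilistic Hough transform: keep only long line segments.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=int(min_line_frac * h), maxLineGap=10)
        if lines is None:
            return []
        # Keep near-vertical segments only.
        verticals = []
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(abs(x2 - x1), abs(y2 - y1)))
            if angle < angle_tol_deg:
                verticals.append((min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)))
        candidates = []
        for i in range(len(verticals)):
            for j in range(i + 1, len(verticals)):
                a, b = verticals[i], verticals[j]
                x0, x1_ = min(a[0], b[0]), max(a[2], b[2])
                y0, y1_ = min(a[1], b[1]), max(a[3], b[3])
                width, height = x1_ - x0, y1_ - y0
                # A column-like rectangle is much taller than it is wide.
                if width > 0 and height / width > 2.0:
                    region = image_bgr[y0:y1_, x0:x1_]
                    if is_concrete(region):
                        candidates.append((x0, y0, x1_, y1_))
        return candidates

    def is_concrete(region_bgr):
        """Placeholder material check (an assumption, not the paper's classifier):
        concrete tends to be low-saturation gray, so threshold mean saturation."""
        hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
        return hsv[:, :, 1].mean() < 40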
Abstract:
As-built models have proven useful in many project-related applications, such as progress monitoring and quality control. However, they are not widely produced because considerable manual effort is still required to convert remote sensing data from photogrammetry or laser scanning into an as-built model. To automate the generation of as-built models, the first and fundamental step is to automatically recognize infrastructure-related elements in the remote sensing data. This paper outlines a framework for creating visual pattern recognition (VPR) models that automate the recognition of infrastructure-related elements based on their visual features. The framework starts by identifying the visual characteristics of each infrastructure element type and representing them numerically using image analysis tools. The derived representations, together with their relative topology, are then used to form element VPR models. So far, VPR models for four infrastructure-related elements have been created using the framework. The high recognition performance of these models validates the effectiveness of the framework in recognizing infrastructure-related elements.
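The abstract does not specify the features or the matching rule, so the following is only a plausible sketch of a VPR model in Python with OpenCV/NumPy: a hue histogram plus an edge-orientation histogram as the numeric representation, and nearest-prototype matching as the recognizer. All names and parameters are hypothetical.

    import cv2
    import numpy as np

    def visual_signature(image_bgr):
        """Numerically represent an element's visual characteristics:
        a hue histogram (color/material cue) concatenated with an
        edge-orientation histogram (shape cue)."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        hue_hist = cv2.calcHist([hsv], [0], None, [18], [0, 180]).ravel()
        gx = cv2.Sobel(image_bgr, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(image_bgr, cv2.CV_32F, 0, 1)
        angles = np.arctan2(gy, gx).ravel()
        ori_hist, _ = np.histogram(angles, bins=18, range=(-np.pi, np.pi))
        sig = np.concatenate([hue_hist, ori_hist]).astype(np.float64)
        return sig / (sig.sum() + 1e-9)  # normalize so signatures are comparable

    class VPRModel:
        """Nearest-prototype recognizer: one averaged signature per element type."""
        def __init__(self):
            self.prototypes = {}  # element type -> mean signature

        def train(self, element_type, example_images):
            sigs = [visual_signature(img) for img in example_images]
            self.prototypes[element_type] = np.mean(sigs, axis=0)

        def recognize(self, image_bgr):
            sig = visual_signature(image_bgr)
            return min(self.prototypes,
                       key=lambda t: np.linalg.norm(sig - self.prototypes[t]))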
Abstract:
It is commonly believed that visual short-term memory (VSTM) consists of a fixed number of "slots" in which items can be stored. An alternative theory, in which memory resource is a continuous quantity distributed over all items, seems to be refuted by the appearance of guessing in human responses. Here, we introduce a model in which resource is not only continuous but also variable across items and trials, causing random fluctuations in encoding precision. We tested this model against previous models using two VSTM paradigms and two feature dimensions. Our model accurately accounts for all aspects of the data, including apparent guessing, and outperforms slot models in formal model comparison. At the neural level, variability in precision might correspond to variability in neural population gain and a doubly stochastic stimulus representation. Our results suggest that VSTM resource is continuous and variable rather than discrete and fixed, which may explain why the subjective experience of VSTM is not all-or-none.
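A minimal simulation of the core idea in Python with NumPy; treating precision as a gamma-distributed von Mises concentration is a simplifying assumption made here to show how variable precision alone can produce responses that look like guessing.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_estimation_errors(n_trials, mean_precision, scale):
        # Variable-precision encoding: on each trial the precision (here the
        # von Mises concentration kappa) is itself a random draw from a gamma
        # distribution, so some trials are encoded sharply and others poorly.
        kappa = rng.gamma(shape=mean_precision / scale, scale=scale, size=n_trials)
        # Estimation error around the true feature value, in radians.
        return rng.vonmises(0.0, kappa)

    errors = simulate_estimation_errors(n_trials=10_000, mean_precision=4.0, scale=4.0)
    # Low-kappa trials yield near-uniform errors, mimicking "guessing" even
    # though every item receives a nonzero share of a continuous resource.
    print("fraction of large errors (|err| > pi/2):",
          np.mean(np.abs(errors) > np.pi / 2))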
Abstract:
An object in the peripheral visual field is more difficult to recognize when it is surrounded by other objects, a phenomenon called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from the spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for the spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and the foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
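A toy version of such a model in Python with NumPy; the Gaussian tuning curves, the uniform pooling of target and flanker signals, and the population-vector decoder are illustrative assumptions, chosen to show how integration yields "compulsory averaging" of target and flanker orientations.

    import numpy as np

    N = 64  # orientation-tuned neurons; preferred orientations tile 180 degrees
    preferred = np.linspace(0.0, 180.0, N, endpoint=False)

    def population_response(stimulus_deg, tuning_width=20.0):
        """Gaussian tuning on the circular orientation domain (period 180 deg)."""
        d = (stimulus_deg - preferred + 90.0) % 180.0 - 90.0
        return np.exp(-0.5 * (d / tuning_width) ** 2)

    def decode(response):
        """Population-vector readout on the doubled-angle circle."""
        angles = np.deg2rad(2.0 * preferred)
        vec = np.sum(response * np.exp(1j * angles))
        return np.rad2deg(np.angle(vec)) / 2.0 % 180.0

    target, flanker = 80.0, 100.0
    # Spatial integration: target and flanker signals are pooled into one
    # population response before decoding, as hypothesized in crowding.
    pooled = population_response(target) + population_response(flanker)
    print("decoded orientation:", decode(pooled))  # near 90, the average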
Abstract:
Visual information is difficult to search and interpret when the density of the displayed information is high or the layout is chaotic. Visual information that exhibits such properties is generally referred to as "cluttered". Clutter should be avoided in information visualizations, and in interface design in general, because it can severely degrade task performance. Although previous studies have identified computable correlates of clutter (such as local feature variance and edge density), understanding of why humans perceive some scenes as more cluttered than others remains limited. Here, we explore an account of clutter that is inspired by findings from visual perception studies. Specifically, we test the hypothesis that the so-called "crowding" phenomenon is an important constituent of clutter. We constructed an algorithm that predicts visual clutter in arbitrary images by estimating the perceptual impairment due to crowding. After verifying that this model can reproduce crowding data, we tested whether it can also predict clutter. We found that its predictions correlate well both with subjective clutter assessments and with search performance in cluttered scenes. These results suggest that crowding and clutter may indeed be closely related concepts, and they point to avenues for further research.
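The authors' algorithm is not specified in the abstract; the following Python/OpenCV sketch is a simplified stand-in that captures the stated idea: pool local edge orientations over zones that grow with eccentricity (Bouma-style crowding zones) and score their variability, so that higher pooled orientation variance means stronger crowding and hence more clutter.

    import cv2
    import numpy as np

    def clutter_score(image_bgr, fixation, bouma=0.5, n_samples=200, seed=0):
        """Crowding-inspired clutter estimate: at random image locations, pool
        edge orientations over a neighborhood whose radius grows with distance
        from fixation (critical spacing ~ bouma * eccentricity); high circular
        variance inside these pooling zones signals crowding, i.e., clutter."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        ori = np.arctan2(gy, gx)          # local edge orientation
        mag = np.hypot(gx, gy)            # edge strength
        h, w = gray.shape
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(n_samples):
            y, x = rng.integers(h), rng.integers(w)
            ecc = np.hypot(y - fixation[0], x - fixation[1])
            r = max(3, int(bouma * ecc))  # pooling radius grows with eccentricity
            y0, y1 = max(0, y - r), min(h, y + r)
            x0, x1 = max(0, x - r), min(w, x + r)
            weights = mag[y0:y1, x0:x1].ravel()
            angles = ori[y0:y1, x0:x1].ravel()
            if weights.sum() < 1e-6:
                continue
            # Circular variance of pooled orientations (180-deg periodic,
            # hence the angle doubling), weighted by edge strength.
            resultant = np.abs(np.sum(weights * np.exp(2j * angles))) / weights.sum()
            scores.append(1.0 - resultant)
        return float(np.mean(scores))     # higher = more cluttered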
On the generality of crowding: visual crowding in size, saturation, and hue compared to orientation.
Abstract:
Perception of peripherally viewed shapes is impaired when the shapes are surrounded by similar shapes, a phenomenon commonly referred to as "crowding". Although crowding has been studied extensively for the perception of characters (mainly letters) and, to a lesser extent, for orientation, little is known about whether and how it affects the perception of other features. Nevertheless, current crowding models suggest that the effect should be rather general and thus not restricted to letters and orientation. Here, we report a series of experiments investigating crowding in three elementary feature dimensions: size, hue, and saturation. Crowding effects in these dimensions were benchmarked against those in the orientation domain. Our primary finding is that all of the features studied show clear signs of crowding. First, identification thresholds increase with decreasing mask spacing. Second, for all tested features, critical spacing is roughly half the viewing eccentricity and independent of stimulus size, a property previously proposed as the hallmark of crowding. Interestingly, although critical spacings are highly comparable, crowding magnitude differs across features: size crowding is almost as strong as orientation crowding, whereas the effect is much weaker for saturation and hue. We suggest that future theories and models of crowding should accommodate these differences in crowding strength.
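A small worked example of the critical-spacing regularity reported above, in Python; the rule that crowding occurs when target-mask spacing falls below roughly half the eccentricity is taken from the abstract, while the specific numbers are hypothetical.

    def is_crowded(spacing_deg, eccentricity_deg, bouma=0.5):
        """Crowding is expected when target-mask spacing falls below the
        critical spacing, roughly bouma * eccentricity (independent of size)."""
        return spacing_deg < bouma * eccentricity_deg

    # A target at 10 deg eccentricity has a critical spacing of about 5 deg,
    # so masks 3 deg away should crowd it while masks 6 deg away should not.
    print(is_crowded(spacing_deg=3.0, eccentricity_deg=10.0))  # True
    print(is_crowded(spacing_deg=6.0, eccentricity_deg=10.0))  # False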
Abstract:
While searching for objects, we combine information from multiple visual feature dimensions. Classical theories of visual search assume that features are processed independently prior to an integration stage. On this assumption, features that are equally discriminable in single-feature search should remain so in conjunction search. We tested this hypothesis by examining whether search accuracy in feature search predicts accuracy in conjunction search. Subjects searched for objects defined by combinations of color and orientation or color and size while their eye movements were recorded. Prior to the main experiment, we matched feature discriminability so that in feature search, 70% of saccades were likely to go to the correct target stimulus. In contrast to this matched single-feature performance, conjunction search showed an asymmetry in feature discrimination: a similar percentage of saccades went to the correct color as in feature search, but far fewer went to the correct orientation or size. Accuracy in feature search is therefore a good predictor of accuracy in conjunction search for color, but not for size or orientation. We propose two explanations for such asymmetries in conjunction search: the use of conjunctively tuned channels and differential crowding effects across features.
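A minimal simulation of the independence prediction being tested, in Python with NumPy; the decision rule (independent feature channels, saccade to the item with the highest summed evidence) and all noise parameters are assumptions introduced here for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_conjunction_search(n_trials=20_000, d_color=1.0, d_ori=1.0,
                                    n_items=4):
        """Independent-channel model: each display holds one target (both
        features correct) and distractors sharing exactly one target feature.
        Each item accrues noisy evidence per feature; the saccade goes to the
        item with the highest summed evidence. Matched feature channels
        predict symmetric color/orientation accuracy, which is exactly what
        the reported data contradict."""
        color_correct = ori_correct = 0
        for _ in range(n_trials):
            # Item 0 = target; remaining items match color only, orientation
            # only, or neither.
            has_color = np.array([1, 1, 0, 0])[:n_items]
            has_ori = np.array([1, 0, 1, 0])[:n_items]
            evidence = (d_color * has_color + rng.normal(0, 1, n_items)
                        + d_ori * has_ori + rng.normal(0, 1, n_items))
            chosen = int(np.argmax(evidence))
            color_correct += has_color[chosen]
            ori_correct += has_ori[chosen]
        return color_correct / n_trials, ori_correct / n_trials

    p_color, p_ori = simulate_conjunction_search()
    print(f"saccades to correct color: {p_color:.2f}, "
          f"to correct orientation: {p_ori:.2f}")  # symmetric under independence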