20 results for Bag-of-visual Words
in the Cambridge University Engineering Department Publications Database
Abstract:
A significant proportion of the processing delays within the visual system are luminance dependent. Placing an attenuating filter over one eye therefore causes a temporal delay between the eyes, and hence an illusion of motion in depth for objects moving in the fronto-parallel plane, known as the Pulfrich effect. We have used this effect to study adaptation to such an interocular delay in two normal subjects wearing 75% attenuating neutral density filters over one eye. In two separate experimental periods both subjects showed about 60% adaptation over 9 days. Reciprocal effects were seen on removal of the filters. To isolate the site of adaptation we also measured the subjects' flicker fusion frequencies (FFFs) and contrast sensitivity functions (CSFs). Both subjects showed significant adaptation in their FFFs. An attempt to model the Pulfrich and FFF adaptation curves with a change in a single parameter in Kelly's [(1971) Journal of the Optical Society of America, 71, 537-546] retinal model was only partially successful. Although we have demonstrated adaptation in normal subjects to induced time delays in the visual system, we postulate that this may at least partly represent retinal adaptation to the change in mean luminance.
Abstract:
Visual information is difficult to search and interpret when the density of the displayed information is high or the layout is chaotic. Visual information that exhibits such properties is generally referred to as being "cluttered." Clutter should be avoided in information visualizations and interface design in general because it can severely degrade task performance. Although previous studies have identified computable correlates of clutter (such as local feature variance and edge density), understanding of why humans perceive some scenes as being more cluttered than others remains limited. Here, we explore an account of clutter that is inspired by findings from visual perception studies. Specifically, we test the hypothesis that the so-called "crowding" phenomenon is an important constituent of clutter. We constructed an algorithm to predict visual clutter in arbitrary images by estimating the perceptual impairment due to crowding. After verifying that this model can reproduce crowding data, we tested whether it can also predict clutter. We found that its predictions correlate well with both subjective clutter assessments and search performance in cluttered scenes. These results suggest that crowding and clutter may indeed be closely related concepts, and point to avenues for further research.
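As a rough illustration of the kind of crowding-based computation described in this abstract, the sketch below (Python) scores clutter as the average local feature variability pooled over fixed-size neighbourhoods of a grayscale image. The single fixed pooling window, the use of intensity variance as the pooled feature, and the function names are illustrative assumptions, not the model evaluated in the paper.

    # Hypothetical crowding-inspired clutter proxy; not the authors' model.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(img, size):
        # Standard deviation of intensities inside each size x size window.
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    def clutter_score(img, pool_size=31):
        # Higher average local variability is taken as a crude proxy for the
        # crowding-induced difficulty of individuating items in a region.
        img = np.asarray(img, dtype=float)
        return float(local_std(img, pool_size).mean())

In a fuller model the pooling window would grow with eccentricity, mirroring how crowding zones scale in peripheral vision.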
Abstract:
The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to the target and the hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is more efficiently filtered out during the extraction of hand compared to target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in visual and motor systems.
Abstract:
First responders are in danger when they perform tasks in damaged buildings after earthquakes. Structural collapse due to the failure of critical load-bearing structural members (e.g., columns) during a post-earthquake event such as an aftershock can turn first responders into victims, since they are unable to assess the impact of the damage inflicted on load-bearing members. The writers here propose a method that can provide first responders with a crude but quick estimate of the damage inflicted on load-bearing members. Under the proposed method, critical structural members (reinforced concrete columns in this study) are identified from digital visual data, and the damage superimposed on these structural members is detected with the help of visual pattern recognition techniques. The correlation of the two (e.g., the position, orientation and size of a crack on the surface of a column) is used to query a case-based reasoning knowledge base, which contains a priori classified states of columns according to the damage inflicted on them. When the query results indicate that the column's damage state is severe, the method assumes that a structural collapse is likely and first responders are warned to evacuate.
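A minimal sketch of the case-based reasoning query step is given below (Python), assuming each crack is summarised by its relative position on the column, its orientation, and its relative length, and that retrieval is by nearest neighbour over a priori classified cases. The feature choices, distance weighting, and damage-state labels are illustrative assumptions, not the writers' implementation.

    # Hypothetical case-based reasoning query; descriptors and weights are illustrative.
    from dataclasses import dataclass
    import math

    @dataclass
    class CrackObservation:
        rel_position: float     # crack centroid height on the column, 0 (base) to 1 (top)
        orientation_deg: float  # crack orientation relative to the column axis
        rel_length: float       # crack length normalised by column width

    @dataclass
    class Case:
        features: CrackObservation
        damage_state: str       # e.g. "light", "moderate", "severe"

    def distance(a, b):
        # Euclidean distance over the crack descriptors, with orientation scaled to [0, 1].
        return math.sqrt((a.rel_position - b.rel_position) ** 2
                         + ((a.orientation_deg - b.orientation_deg) / 90.0) ** 2
                         + (a.rel_length - b.rel_length) ** 2)

    def query_damage_state(obs, knowledge_base):
        # Retrieve the damage state of the closest a priori classified case.
        best = min(knowledge_base, key=lambda c: distance(obs, c.features))
        return best.damage_state

    # If the retrieved state is "severe", the method assumes collapse is likely
    # and first responders are warned to evacuate.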