123 results for Visual electrophysiology
Abstract:
The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] that translate visual information into representations suitable for motor control [5, 6]. To accomplish this, visual information pertaining to the target and to the hand needs to be identified and linked to the appropriate internal representations during the movement. Meanwhile, other visual information must be filtered out, which is especially demanding in visually cluttered natural environments. If selection of the relevant sensory information for online control were achieved by visual attention, its limited capacity [7] would substantially constrain the efficiency of visuomotor feedback control. Here we demonstrate that both exogenously and endogenously cued attention facilitate the processing of visual target information [8], but not of visual hand information. Moreover, distracting visual information is filtered out more efficiently during the extraction of hand information than of target information. Our results therefore suggest the existence of a dedicated visuomotor binding mechanism that links the hand representation in the visual and motor systems.
Abstract:
Relative (comparative) attributes are promising for thematic ranking of visual entities and also aid recognition tasks. However, learning attribute ranks typically requires a substantial amount of relational supervision, which is highly tedious and impractical for real-world applications. In this paper, we introduce the Semantic Transform, which, under minimal supervision, adaptively finds a semantic feature space together with a class ordering that best relates the classes. Such a semantic space is found for every attribute category. To relate the classes under weak supervision, the class ordering is refined iteratively according to a cost function. This problem is NP-hard in general, and we therefore propose a constrained search-tree formulation for it. Driven by the adaptive semantic feature space representation, our model achieves the best results to date on the tasks of relative, absolute, and zero-shot classification on two popular datasets. © 2013 IEEE.
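For illustration only, the iterative refinement described in this abstract can be thought of as alternating between fitting a one-dimensional semantic projection under an assumed class ordering and searching for the ordering that minimizes a cost. The sketch below is a minimal, hedged stand-in under those assumptions: the least-squares projection, the brute-force ordering search, and all function names (fit_projection, ordering_cost, refine) are illustrative inventions, not the paper's Semantic Transform or its constrained search tree.

```python
# Minimal illustrative sketch: alternate between (1) fitting a 1-D semantic
# projection consistent with the current class ordering and (2) refining the
# ordering to reduce a cost. All names and choices here are assumptions.
import numpy as np
from itertools import permutations

def fit_projection(X, y, ordering):
    # Map each class to its rank in the current ordering and fit a
    # least-squares direction w so that X @ w approximates those ranks
    # (a crude stand-in for learning a semantic feature space).
    ranks = {c: r for r, c in enumerate(ordering)}
    t = np.array([ranks[c] for c in y], dtype=float)
    w, *_ = np.linalg.lstsq(X, t, rcond=None)
    return w

def ordering_cost(X, y, ordering, w):
    # Squared error between projected samples and the ranks implied by the
    # ordering; lower means the ordering and the projection agree better.
    ranks = {c: r for r, c in enumerate(ordering)}
    t = np.array([ranks[c] for c in y], dtype=float)
    return float(np.sum((X @ w - t) ** 2))

def refine(X, y, classes, n_iter=10):
    ordering = list(classes)                      # arbitrary initial ordering
    for _ in range(n_iter):
        w = fit_projection(X, y, ordering)
        # Searching over all orderings is NP-hard in general; for a handful of
        # classes we simply brute-force it (the paper instead proposes a
        # constrained search-tree formulation).
        best = min(permutations(classes),
                   key=lambda p: ordering_cost(X, y, list(p), w))
        if list(best) == ordering:
            break
        ordering = list(best)
    return ordering, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 classes whose means lie along one latent semantic direction.
    X = np.vstack([rng.normal(m, 0.3, size=(20, 2))
                   for m in ([0, 0], [1, 1], [2, 2])])
    y = np.repeat([0, 1, 2], 20)
    print(refine(X, y, classes=[2, 0, 1]))
```

On toy data like the above, the recovered ordering converges to the latent one within a few iterations; the point of the sketch is only the alternation between projection fitting and ordering refinement, not the paper's actual optimization.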
Abstract:
Experimental research in biology has uncovered a number of different ways in which flying insects use cues derived from optical flow for navigational purposes, such as safe landing, obstacle avoidance, and dead reckoning. In this study, we use a synthetic methodology to gain additional insights into the navigation behavior of bees. Specifically, we focus on the mechanisms of course-stabilization behavior and the visually mediated odometer, using a biological model of the motion detector for long-range, goal-directed navigation in a 3D environment. Performance tests of the proposed navigation method are conducted on a blimp-type flying robot platform in uncontrolled indoor environments. The results show that the proposed mechanism can be used for goal-directed navigation. Further analysis is also conducted to enhance the navigation performance of autonomous aerial vehicles. © 2003 Elsevier B.V. All rights reserved.
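As a rough illustration of the ingredients this abstract mentions, the sketch below combines a correlation-type (Hassenstein-Reichardt) elementary motion detector, one common biological model of insect motion detection, with a visually mediated odometer that accumulates the detector's rectified output. The filter constant, the 1-D sinusoidal input, and all function names are illustrative assumptions and do not reproduce the paper's model or the blimp platform.

```python
# Illustrative sketch: a Hassenstein-Reichardt-style elementary motion
# detector (EMD) on two neighbouring photoreceptor signals, plus an odometer
# that integrates image motion over time. Parameters are arbitrary examples.
import numpy as np

def emd_response(signal_a, signal_b, tau=0.9):
    # Each photoreceptor signal is low-pass filtered (first-order, coefficient
    # tau) and multiplied with the *undelayed* neighbour; subtracting the two
    # mirror-symmetric half-detectors yields a direction-selective response.
    delayed_a = np.zeros_like(signal_a)
    delayed_b = np.zeros_like(signal_b)
    out = np.zeros_like(signal_a)
    for t in range(1, len(signal_a)):
        delayed_a[t] = tau * delayed_a[t - 1] + (1 - tau) * signal_a[t]
        delayed_b[t] = tau * delayed_b[t - 1] + (1 - tau) * signal_b[t]
        out[t] = delayed_a[t] * signal_b[t] - delayed_b[t] * signal_a[t]
    return out

def visual_odometer(emd_output):
    # Accumulate the magnitude of image motion over time; in honeybees this
    # integrated optic flow is thought to serve as a distance estimate.
    return np.cumsum(np.abs(emd_output))

if __name__ == "__main__":
    t = np.arange(500)
    # A drifting sinusoidal pattern sampled by two photoreceptors with a small
    # phase offset, standing in for image slip during forward flight.
    a = np.sin(2 * np.pi * 0.02 * t)
    b = np.sin(2 * np.pi * 0.02 * t - 0.4)
    r = emd_response(a, b)
    print("mean directional response:", r.mean())
    print("accumulated odometer reading:", visual_odometer(r)[-1])
```

The sign of the mean EMD response indicates the direction of image motion (useful for course stabilization), while the accumulated magnitude grows with the amount of image motion experienced, which is the intuition behind an optic-flow-based odometer.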