946 results for Visual information
Abstract:
The goal of this study was to investigate the effects of manipulating the characteristics of a visual stimulus on postural control in dyslexic children. A total of 18 dyslexic and 18 non-dyslexic children stood upright inside a moving room, as still as possible, and looked at a target under different conditions of distance between the participant and the moving room's frontal wall (25-150 cm) and of vision (full and central). The first trial was performed without vision (baseline). Four trials were then performed in which the room remained stationary, and eight trials with the room moving, each lasting 60 s. Mean sway amplitude, coherence, relative phase, and angular deviation were calculated. The results revealed that dyslexic children swayed with larger magnitude in both stationary and moving conditions. When the room remained stationary, all children showed larger body sway magnitude at the 150 cm distance. Dyslexic children showed larger body sway magnitude in the central than in the full vision condition. In the moving condition, body sway magnitude was similar between dyslexic and non-dyslexic children, but the coupling between visual information and body sway was weaker in dyslexic children. Moreover, in the absence of peripheral visual cues, induced body sway in dyslexic children was temporally delayed with respect to the visual stimulus. Taken together, these results indicate that the poor postural control performance of dyslexic children is related to how sensory information is acquired from the environment and used to produce postural responses. In conditions in which sensory cues are less informative, dyslexic children take longer to process sensory stimuli in order to obtain precise information, which leads to performance deterioration.
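The coupling measures reported above (coherence and relative phase between room motion and body sway) reduce to standard cross-spectral estimates over the two displacement time series. A minimal sketch with scipy, assuming 60-s trials sampled at 100 Hz and an illustrative 0.2 Hz room oscillation; the signal names and synthetic data are stand-ins, not the study's recordings:

```python
import numpy as np
from scipy.signal import coherence, csd

fs = 100.0                      # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)    # one 60-s trial
f_stim = 0.2                    # moving-room oscillation frequency (Hz), illustrative

# Synthetic stand-ins for the recorded signals: room position and body sway.
room = np.sin(2 * np.pi * f_stim * t)
sway = 0.8 * np.sin(2 * np.pi * f_stim * t - 0.4) + 0.3 * np.random.randn(t.size)

# Magnitude-squared coherence: strength of visual-postural coupling per frequency.
f, Cxy = coherence(room, sway, fs=fs, nperseg=1024)

# Relative phase at the stimulus frequency, from the cross-spectral density.
f2, Pxy = csd(room, sway, fs=fs, nperseg=1024)
idx = np.argmin(np.abs(f2 - f_stim))
rel_phase_deg = np.degrees(np.angle(Pxy[idx]))

print(f"coherence at {f_stim} Hz: {Cxy[np.argmin(np.abs(f - f_stim))]:.2f}")
print(f"relative phase at {f_stim} Hz: {rel_phase_deg:.1f} deg")
```

A temporal delay of the kind reported for dyslexic children would show up here as a larger phase lag of sway relative to the room signal.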
Abstract:
Visual control of braking was studied in recreational cyclists by manipulating the bicycle's velocity at braking initiation (low, medium, and high) and the approach trajectory (straight and curved) with respect to a stationary obstacle. The hypothesis was that the type of trajectory, exclusively or in interaction with initial velocity, would affect the time-to-collision visual information (tau margin) and its first time derivative (tau-dot), respectively, at the onset of and during braking. The results revealed that velocity significantly affected the tau margin, while tau-dot remained unaltered independently of condition. The type of trajectory clearly did not affect the visual control of braking in cyclists.
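The tau margin is the optically specified time to collision, τ = θ/θ̇ for optical angle θ, which for a straight approach equals distance over approach speed; its derivative τ̇ indexes braking regulation, with a constant τ̇ of −0.5 corresponding to a uniform deceleration that stops exactly at the obstacle. A worked numeric sketch under those textbook definitions, with illustrative speeds and distances:

```python
import numpy as np

# Cyclist decelerating uniformly so as to stop exactly at a stationary obstacle.
dt = 0.01
t = np.arange(0, 8.0, dt)              # braking phase (s)
v0 = 6.0                               # speed at braking onset (m/s), illustrative
a = -v0**2 / (2 * 25.0)                # deceleration that stops in exactly 25 m
v = v0 + a * t                         # instantaneous speed
d = 25.0 - (v0 * t + 0.5 * a * t**2)   # distance to the obstacle (m)

# Tau margin: optically specified time to collision (d/v for a straight approach).
tau = d / v
# Tau-dot: first time derivative of tau; -0.5 marks an ideal controlled stop.
tau_dot = np.gradient(tau, dt)

print(f"tau at braking onset: {tau[0]:.2f} s")                    # 25/6 ~ 4.17 s
print(f"tau-dot (constant during braking): {tau_dot[100]:.2f}")   # ~ -0.50
```

Raising the initial velocity shrinks the tau margin at braking onset (d/v0) without changing tau-dot, which is the dissociation the study reports.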
Abstract:
Currently, a great deal of visual information is vehemently conveyed in all media, for example in print media and in interfaces used for advertising in conjunction with informational design. This visual information has great influence on the lives of human beings, since vision is the sense they use most. Studies on visual identity have not explored this issue satisfactorily, which has led to little development of projects in the area. There is a need for analyses that make projective principles applicable and accessible to the understanding of most individuals. This study aimed to propose an evaluation of visual identities, which were analysed by means of concepts of visual usability, design methodologies, and Gestalt. We contacted design firms specializing in visual identity projects, where interviews were conducted to collect the marks released for analysis. The results point to a frequent demand for the employment of visual usability principles, design methodologies, and Gestalt in visual identity design.
Abstract:
Introduction Current empirical findings indicate that the efficiency of decision making (both for experts and near-experts) in simple situations is reduced under increased stress (Wilson, 2008). To explain this phenomenon, Attentional Control Theory (ACT; Eysenck et al., 2007) postulates an impairment of attentional processes resulting in less efficient processing of visual information. From a practitioner's perspective, it would be highly relevant to know whether this phenomenon can also be found in complex sport situations such as the game of football. Consequently, in the present study, decision making of football players was examined under regular vs. increased anxiety conditions. Methods 22 participants (11 experts and 11 near-experts) viewed 24 complex football situations (counterbalanced) in two anxiety conditions from the perspective of the last defender. They had to decide as quickly and accurately as possible on the next action of the player in possession (options: shot on goal, dribble, or pass to a designated team member) for equal numbers of trials in a near and a far distance condition (based on the position of the player in possession). Anxiety was manipulated via a competitive environment, false feedback, and ego threats. Decision time and accuracy, gaze behaviour (e.g., fixation duration on different locations), state anxiety, and mental effort were used as dependent variables and analysed with 2 (expertise) × 2 (distance) × 2 (anxiety) ANOVAs with repeated measures on the last two factors. Besides expertise differences, it was hypothesised that, based on ACT, increased anxiety reduces performance efficiency and impairs gaze behaviour. Results and Discussion Anxiety was manipulated successfully, as indicated by higher ratings of state anxiety, F(1, 20) = 13.13, p < .01, ηp2 = .40. Besides expertise differences in decision making – experts responded faster, F(1, 20) = 11.32, p < .01, ηp2 = .36, and more accurately, F(1, 20) = 23.93, p < .01, ηp2 = .55, than near-experts – decision time, F(1, 20) = 9.29, p < .01, ηp2 = .32, and mental effort, F(1, 20) = 7.33, p = .01, ηp2 = .27, increased for both groups in the high anxiety condition. This result confirms the ACT assumption that processing efficiency is reduced when anxious. Replicating earlier findings, a significant expertise by distance interaction was observed, F(1, 18) = 18.53, p < .01, ηp2 = .51, with experts fixating longer on the player in possession or the ball in the near distance condition and longer on other opponents, teammates, and free space in the far distance condition. This shows that experts are able to adjust their gaze behaviour to the affordances of the displayed playing patterns. Additionally, a three-way interaction was found, F(1, 18) = 7.37, p = .01, ηp2 = .29, revealing that experts utilised a reduced number of fixations in the far distance condition when anxious, indicating a reduced ability to pick up visual information. Since especially the visual search behaviour of experts was impaired, the ACT prediction that particularly top-down processes are affected by anxiety was confirmed. Taken together, the results show that sports performance is negatively influenced by anxiety, since longer response times, higher mental effort, and inefficient visual search behaviour were observed. From a practitioner's perspective, this finding might suggest a preference for (implicit) perceptual-cognitive training; however, this recommendation needs to be empirically supported in intervention studies.
References: Eysenck, M. W., Derakshan, N., Santos, R., & Calvo, M. G. (2007). Anxiety and cognitive performance: Attentional control theory. Emotion, 7, 336-353. Wilson, M. (2008). From processing efficiency to attentional control: A mechanistic account of the anxiety-performance relationship. International Review of Sport and Exercise Psychology, 1, 184-201.
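The ηp2 effect sizes quoted in the abstract above follow directly from the printed F ratios through the standard identity ηp2 = (F · df_effect) / (F · df_effect + df_error). A short sketch reproducing three of the reported values:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F ratio and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# F ratios and degrees of freedom as reported in the abstract above.
for label, f, df1, df2 in [
    ("state anxiety check",  13.13, 1, 20),   # reported as .40
    ("decision time",         9.29, 1, 20),   # reported as .32
    ("expertise x distance", 18.53, 1, 18),   # reported as .51
]:
    print(f"{label}: eta_p^2 = {partial_eta_squared(f, df1, df2):.2f}")
```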
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues, they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
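Cumulative and mean fixation duration per design cell amount to a grouped sum and mean over individual fixations. A minimal pandas sketch under that reading; the column names and toy rows are illustrative, not the study's data format:

```python
import pandas as pd

# One row per fixation, labelled by the three design factors used in the study.
fixations = pd.DataFrame({
    "gesture":     ["present", "present", "absent", "absent", "present"],
    "gaze_dir":    ["speaker", "speaker", "listener", "speaker", "speaker"],
    "roi":         ["face", "hands", "face", "body", "face"],
    "duration_ms": [420, 180, 350, 150, 510],
})

# Cumulative (sum) and mean fixation duration for each factor combination.
metrics = (
    fixations
    .groupby(["gesture", "gaze_dir", "roi"])["duration_ms"]
    .agg(cumulative="sum", mean="mean")
    .reset_index()
)
print(metrics)
```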
Abstract:
This article describes a new visual servo control scheme and the strategies used to carry out dynamic tasks on the Robotenis platform. This platform is basically a parallel robot equipped with a system for the acquisition and processing of visual information. Its main feature is a completely open control architecture, planned so that control strategies and algorithms (visual and actuated-joint controllers) can be designed, implemented, tested, and compared. The following sections describe a new visual control strategy specially designed to track and intercept objects in 3D space. The results are compared with a controller presented in previous works, in which the end effector of the robot keeps a constant distance from the tracked object. In this work, the controller is specifically designed to allow changes in the tracking reference. Changes in the tracking reference can be used to grip a moving object or, as in this case, to hit a hanging ping-pong ball. Lyapunov stability is taken into account in the controller design.
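The abstract does not spell out the control law, but the classical image-based visual servoing scheme this line of work builds on drives the feature error e = s − s* with a camera velocity v = −λL⁺e, for which V = ½‖e‖² is a Lyapunov candidate that decreases whenever LL⁺ is positive definite. A generic sketch of that textbook law for a single image point; the gain, depth, and feature values are illustrative, not the Robotenis controller itself:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

lam = 0.5                       # control gain, illustrative
Z = 1.2                         # estimated depth of the tracked object (m), illustrative
s = np.array([0.10, -0.05])     # current feature (normalized image coordinates)
s_star = np.array([0.0, 0.0])   # tracking reference

e = s - s_star
L = interaction_matrix(*s, Z)
v = -lam * np.linalg.pinv(L) @ e   # camera velocity screw: (vx, vy, vz, wx, wy, wz)
print("commanded camera velocity:", np.round(v, 4))
```

Letting s_star vary over time is what accommodates a moving tracking reference of the kind the article exploits to hit the ball.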
Abstract:
The goal of the work described in this paper is to develop a visual line-guided system to be used on board an Autonomous Guided Vehicle (AGV) commercial car, controlling the steering using just the visual information from a line painted below the car. In order to implement the control of the vehicle, a Fuzzy Logic controller has been implemented, which must be robust against curvature and velocity changes. The only input information for the controller is the visual distance from the image centre, captured by a camera pointing downwards, to the guiding line on the road, at a commercial frequency of 30 Hz. The good performance of the controller has been successfully demonstrated in a real environment at urban velocities. The presented results demonstrate the capability of the Fuzzy controller to follow a circuit in urban environments without prior information about the path or any information from additional sensors.
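A Mamdani-style controller of the kind described can be sketched with one input (the pixel offset of the line from the image centre) and one output (a normalized steering command). The membership functions, universes, and rule outputs below are invented for illustration; the paper does not publish its rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(error_px):
    """Steering command in [-1, 1] from the line's pixel offset to the image centre."""
    # Fuzzify the error (pixels): negative = line left of centre.
    left   = tri(error_px, -160, -80, 0)
    center = tri(error_px, -80, 0, 80)
    right  = tri(error_px, 0, 80, 160)
    # One rule per set; outputs are crisp singletons (steer toward the line),
    # combined by a weighted average (centre-of-singletons defuzzification).
    rules = [(left, -0.8), (center, 0.0), (right, 0.8)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

# 30 Hz loop: the only sensor input is the line offset in the image, as in the paper.
for offset in (-120, -40, 0, 60):
    print(f"offset {offset:+4d} px -> steering {fuzzy_steering(offset):+.2f}")
```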
Abstract:
This article presents a visual servoing system that enables a Micro Unmanned Aerial Vehicle (MUAV) to follow a 3D moving object. The presented control strategy is based only on the visual information given by an adaptive tracking method based on colour information. A visual fuzzy system has been developed for servoing the camera situated on the rotary-wing MUAV, which also takes the vehicle's own dynamics into account. This system focuses on continuously following a moving aerial target object, keeping it at a fixed safe distance and centred on the image plane. The algorithm is validated in real flights in outdoor scenarios, showing the robustness of the proposed system against wind perturbations and illumination and weather changes, among others. The obtained results indicate that the proposed algorithm is suitable for complex control tasks, such as object following and pursuit or flying in formation, as well as for indoor navigation.
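The adaptive colour tracker is not specified in the abstract, but the front end of such a system can be sketched as colour segmentation followed by a proportional correction that re-centres the target in the image. A minimal OpenCV sketch under that assumption; the HSV thresholds and gains are placeholders rather than the authors' values:

```python
import cv2
import numpy as np

LOWER = np.array([20, 100, 100])   # HSV lower bound for the target colour, placeholder
UPPER = np.array([35, 255, 255])   # HSV upper bound, placeholder
K_YAW, K_PITCH = 0.002, 0.002      # proportional gains, placeholder

def track_step(frame_bgr):
    """Return (yaw_cmd, pitch_cmd) that re-centre the coloured target, or None if lost."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:                      # target not visible in this frame
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = mask.shape
    # Image-centre error drives the camera/vehicle toward the target.
    return K_YAW * (cx - w / 2), K_PITCH * (cy - h / 2)

# Example with a synthetic frame containing a yellow blob off-centre.
frame = np.zeros((240, 320, 3), np.uint8)
cv2.circle(frame, (220, 80), 20, (0, 220, 255), -1)   # BGR yellow-ish disc
print(track_step(frame))
```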
Abstract:
In this paper, two techniques to control UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first one is based on the detection and tracking of planar structures from an on-board camera, while the second one is based on the detection and 3D reconstruction of the position of the UAV using an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and the results show good behavior of the visual systems (precision in the estimation and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.
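For the second technique, recovering the vehicle's 3D position from an external camera reduces to a perspective-n-point problem once a few known points on the airframe are detected in the image. A generic sketch using OpenCV's solvePnP; the marker layout, camera intrinsics, and pixel detections are invented for illustration:

```python
import cv2
import numpy as np

# Four marker points on the UAV, in the vehicle's own frame (metres), illustrative.
object_pts = np.array([
    [-0.3, -0.3, 0.0], [0.3, -0.3, 0.0], [0.3, 0.3, 0.0], [-0.3, 0.3, 0.0],
], dtype=np.float64)

# Their detected pixel positions in the external camera image, illustrative.
image_pts = np.array([
    [300.0, 260.0], [340.0, 260.0], [340.0, 300.0], [300.0, 300.0],
], dtype=np.float64)

# Pinhole intrinsics of the ground camera (fx, fy, cx, cy), illustrative.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Recover the rigid transform from vehicle frame to camera frame.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    print("UAV position in camera frame (m):", tvec.ravel().round(2))
```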
Abstract:
Abstract: The main objective of this thesis is to provide Unmanned Aerial Vehicles (UAVs) with an additional vision-based source of information extracted by cameras located either on-board or on the ground, in order to allow UAVs to perform visually guided tasks, such as landing or inspection, especially in situations where GPS information is not available, where GPS-based position estimation is not accurate enough for the task at hand, or where payload restrictions do not allow the incorporation of additional sensors on-board.
This thesis covers three of the main computer vision areas: visual tracking and visual pose estimation, which form the basis of the third one, visual servoing, which in this work focuses on using visual information to control UAVs. In this sense, the thesis focuses on presenting novel solutions for tracking objects with cameras on board UAVs, on estimating the pose of the UAVs based on visual information collected by cameras located either on the ground or on-board, and on applying these proposed techniques to different problems, such as visual tracking for aerial refuelling or vision-based landing, among others. The different computer vision techniques presented in this thesis are proposed to solve some of the problems frequently found when addressing vision-based tasks in UAVs, such as obtaining robust vision-based estimations at real-time frame rates, and problems caused by vibrations or 3D motion. All the proposed algorithms have been tested with real-image data in on-line and off-line tests. Different evaluation mechanisms have been used to analyse the performance of the proposed algorithms, such as simulated data, images from real flight tests, publicly available datasets, manually generated ground-truth data, accurate position estimations using a VICON system and a robotic cell, and comparison with state-of-the-art algorithms. Results show that the proposed computer vision algorithms achieve performance that is comparable to, or even better than, state-of-the-art algorithms, obtaining robust estimations at real-time frame rates. This proves that the proposed techniques are fast enough for vision-based control tasks. Therefore, the performance of the proposed vision algorithms has been shown to be of a standard appropriate to the different explored applications: aerial refuelling, landing, and state estimation. It is noteworthy that they have low computational overheads for vision systems.
Abstract:
In optimal foraging theory, search time is a key variable defining the value of a prey type. But the sensory-perceptual processes that constrain the search for food have rarely been considered. Here we evaluate the flight behavior of bumblebees (Bombus terrestris) searching for artificial flowers of various sizes and colors. When flowers were large, search times correlated well with the color contrast of the targets with their green foliage-type background, as predicted by a model of color opponent coding using inputs from the bees' UV, blue, and green receptors. Targets that made poor color contrast with their backdrop, such as white, UV-reflecting ones, or red flowers, took longest to detect, even though brightness contrast with the background was pronounced. When searching for small targets, bees changed their strategy in several ways. They flew significantly slower and closer to the ground, so increasing the minimum detectable area subtended by an object on the ground. In addition, they used a different neuronal channel for flower detection. Instead of color contrast, they used only the green receptor signal for detection. We relate these findings to temporal and spatial limitations of different neuronal channels involved in stimulus detection and recognition. Thus, foraging speed may not be limited only by factors such as prey density, flight energetics, and scramble competition. Our results show that understanding the behavioral ecology of foraging can substantially gain from knowledge about mechanisms of visual information processing.
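The colour-opponent model referred to above scores a target by the distance between its locus and the background's locus in a bee colour space derived from the UV, blue, and green receptor signals. A minimal sketch in the spirit of the colour-hexagon model (Chittka, 1992), with invented quantum catches; it illustrates the computation rather than the study's fitted model:

```python
import numpy as np

def hexagon_locus(q_uv, q_blue, q_green):
    """Colour-hexagon coordinates from receptor quantum catches.

    Catches are normalized so the adapting background yields q = 1 in each
    receptor; excitation then follows E = q / (q + 1) (background E = 0.5).
    """
    e_uv, e_b, e_g = (q / (q + 1.0) for q in (q_uv, q_blue, q_green))
    x = (np.sqrt(3) / 2.0) * (e_g - e_uv)
    y = e_b - 0.5 * (e_uv + e_g)
    return np.array([x, y])

background = hexagon_locus(1.0, 1.0, 1.0)        # by construction, the origin

# Invented quantum catches for two artificial flowers against green foliage.
blue_flower = hexagon_locus(0.4, 2.5, 0.6)       # strong blue-receptor signal
white_uv_flower = hexagon_locus(2.2, 2.4, 2.1)   # bright but nearly unselective

for name, locus in [("blue", blue_flower), ("white/UV", white_uv_flower)]:
    contrast = np.linalg.norm(locus - background)
    print(f"{name} flower: colour contrast = {contrast:.3f}")
```

With these toy numbers, the white, UV-reflecting flower yields a much smaller colour contrast than the blue one despite its high brightness, mirroring the search-time pattern reported above.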
Abstract:
Visual information in primates is relayed from the dorsal lateral geniculate nucleus to the cerebral cortex by three parallel neuronal channels designated the parvocellular, magnocellular, and interlaminar pathways. Here we report that the m2 muscarinic acetylcholine receptor in the macaque monkey visual cortex is selectively associated with synaptic circuits subserving the function of only one of these channels. The m2 receptor protein is enriched both in layer IV axons originating from parvocellular layers of the dorsal lateral geniculate nucleus and in cytochrome oxidase-poor interblob compartments in layers II and III, which are linked with the parvocellular pathway. In these compartments, m2 receptors appear to be heteroreceptors, i.e., they are associated predominantly with asymmetric, noncholinergic synapses, suggesting a selective role in the modulation of excitatory neurotransmission through the parvocellular visual channel.
Abstract:
The visual stimuli that elicit neural activity differ for different retinal ganglion cells and these cells have been categorized by the visual information that they transmit. If specific visual information is conveyed exclusively or primarily by a particular set of ganglion cells, one might expect the cells to be organized spatially so that their sampling of information from the visual field is complete but not redundant. In other words, the laterally spreading dendrites of the ganglion cells should completely cover the retinal plane without gaps or significant overlap. The first evidence for this sort of arrangement, which has been called a tiling or tessellation, was for the two types of "alpha" ganglion cells in cat retina. Other reports of tiling by ganglion cells have been made subsequently. We have found evidence of a particularly rigorous tiling for the four types of ganglion cells in rabbit retina that convey information about the direction of retinal image motion (the ON-OFF direction-selective cells). Although individual cells in the four groups are morphologically indistinguishable, they are organized as four overlaid tilings, each tiling consisting of like-type cells that respond preferentially to a particular direction of retinal image motion. These observations lend support to the hypothesis that tiling is a general feature of the organization of information outflow from the retina and clearly implicate mechanisms for recognition of like-type cells and establishment of mutually acceptable territories during retinal development.
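The tiling claim is often quantified by the coverage factor, cell density multiplied by dendritic-field area, which is near 1 for a mosaic that covers the retina completely without significant overlap. A toy sketch of that arithmetic with invented numbers:

```python
import math

# Invented values for one type of ON-OFF direction-selective ganglion cell.
density_cells_per_mm2 = 150.0       # local cell density (cells/mm^2)
dendritic_field_diameter_mm = 0.09  # diameter of the dendritic field (mm)

field_area = math.pi * (dendritic_field_diameter_mm / 2.0) ** 2
coverage = density_cells_per_mm2 * field_area
print(f"coverage factor: {coverage:.2f}")   # ~1 indicates a gap-free, non-redundant tiling
```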
Abstract:
Mental imagery and visual memory have been considered distinct components in the encoding of information, associated with different working memory processes. Experimental evidence shows, for example, that performance in memory tasks based on the generation of mental images (visual imagery) suffers interference from dynamic visual noise (DVN), whereas the same effect is not observed in visual memory tasks based on visual perception (visual memory). Although considerable evidence shows that imagery and visual memory tasks rely on different cognitive processes, this does not rule out the possibility that they also share common processes, nor that some experimental results pointing to differences between the two tasks arise from methodological differences between the paradigms used to study them. Our objective was to equate the visual mental imagery and visual memory tasks by means of recognition tasks using the spatial retro-cue paradigm. Sequences of Roman letters were presented in visual form (visual memory task) and acoustic form (visual mental imagery task) at four different spatial locations. The first and second experiments analysed the time course of retrieval for both the imagery and the memory process. The third experiment compared the structure of the representations of the two components by presenting DVN during the generation and retrieval stages. Our results show no differences in the storage of visual information during the period under study, but DVN affects the efficiency of the retrieval process, that is, response time, with the visual mental image representation being more susceptible to the noise. The temporal course of retrieval, however, differs between the two components, especially for imagery, which requires more time to retrieve information than memory does. The data corroborate the relevance of the retro-cue paradigm, which indicates that spatial attention is recruited for spatially organized representations, regardless of whether they are visualized or imagined.
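The dynamic visual noise used in the third experiment is typically rendered as a grid of black and white cells in which a random subset flips state on every frame. A minimal numpy sketch of such a generator; grid size and flip probability are illustrative:

```python
import numpy as np

def dvn_frames(n_frames=60, grid=80, flip_prob=0.10, seed=0):
    """Yield successive dynamic-visual-noise frames (0 = black, 1 = white)."""
    rng = np.random.default_rng(seed)
    frame = rng.integers(0, 2, size=(grid, grid))     # random initial pattern
    for _ in range(n_frames):
        flips = rng.random((grid, grid)) < flip_prob  # cells that change this frame
        frame = np.where(flips, 1 - frame, frame)
        yield frame

for i, frame in enumerate(dvn_frames(n_frames=3)):
    print(f"frame {i}: {frame.mean():.2f} proportion white")
```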