985 results for Visual Cues
Abstract:
Visual motion cues play an important role in animal and human locomotion without the need to extract actual ego-motion information. This paper demonstrates a method for estimating the visual motion parameters, namely the Time-To-Contact (TTC), the Focus of Expansion (FOE), and the image angular velocities, from a sparse optical flow estimate registered by a downward-looking camera. The presented method can estimate the visual motion parameters under complicated 6-degree-of-freedom motion, in real time, and with accuracy suitable for the visual navigation of mobile robots.
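The abstract does not reproduce the estimator itself, but the core idea can be sketched. Under the simplifying assumption of pure translation toward a fronto-parallel surface, the flow field radiates from the FOE at a rate set by the TTC, and both quantities fall out of a linear least-squares fit to sparse flow. The Python sketch below is illustrative only; the function name and interface are assumptions, not the paper's method:

```python
import numpy as np

def estimate_foe_ttc(points, flows):
    """Fit the radial flow model v_i = s * (p_i - foe) by least squares.

    points: (N, 2) pixel coordinates of tracked features
    flows:  (N, 2) optical flow vectors at those points
    Returns (foe, ttc): FOE in pixels, TTC in frames (divide by the
    frame rate for seconds). Assumes pure translation toward a
    fronto-parallel surface.
    """
    n = points.shape[0]
    # Substituting c = -s * foe makes the model linear: v = s * p + c.
    A = np.zeros((2 * n, 3))
    b = np.empty(2 * n)
    A[0::2, 0] = points[:, 0]   # x equations: vx = s*px + cx
    A[0::2, 1] = 1.0
    A[1::2, 0] = points[:, 1]   # y equations: vy = s*py + cy
    A[1::2, 2] = 1.0
    b[0::2] = flows[:, 0]
    b[1::2] = flows[:, 1]
    (s, cx, cy), *_ = np.linalg.lstsq(A, b, rcond=None)
    foe = np.array([-cx / s, -cy / s])  # the flow vanishes at the FOE
    ttc = 1.0 / s                       # expansion rate s = 1/TTC
    return foe, ttc

# Synthetic check: flow radiating from FOE (320, 240) with TTC = 50 frames.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 640, size=(100, 2))
vecs = (pts - np.array([320.0, 240.0])) / 50.0
print(estimate_foe_ttc(pts, vecs))  # ~ (array([320., 240.]), 50.0)
```

With real, noisy flow and full 6-DOF motion, a robust fit (e.g., RANSAC over the same linear model) plus compensation of the rotational flow component would be needed; the abstract suggests that handling that general case is the paper's contribution.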
Abstract:
Dyspnea is the major source of disability in chronic obstructive pulmonary disease (COPD). In COPD, environmental cues (e.g. the prospect of having to climb stairs) become associated with dyspnea and may trigger dyspnea even before physical activity commences. We hypothesised that brain activation relating to such cues would differ between COPD patients and healthy controls, reflecting greater engagement of emotional mechanisms in patients. Methods: Using FMRI, we investigated brain responses to dyspnea-related word cues in 41 COPD patients and 40 healthy age-matched controls. We combined these findings with scores from self-report questionnaires, thus linking the FMRI task with clinically relevant measures. This approach was adapted from studies in pain that enable identification of brain networks responsible for pain processing despite the absence of a physical challenge. Results: COPD patients demonstrated activation in the medial prefrontal cortex (mPFC) and anterior cingulate cortex (ACC) that correlated with the visual analogue scale (VAS) response to word cues. This activity independently correlated with patient-reported questionnaire scores for depression, fatigue, and dyspnea vigilance. Activation in the anterior insula, lateral prefrontal cortex (lPFC), and precuneus correlated with the VAS dyspnea scale but not with the questionnaires. Conclusions: Our findings suggest that engagement of the brain's emotional circuitry is important for the interpretation of dyspnea-related cues in COPD and is influenced by depression, fatigue, and vigilance. A heightened response to salient cues is associated with increased symptom perception in chronic pain and asthma, and our findings suggest that such mechanisms may be relevant in COPD.
Abstract:
Difficulty with literacy acquisition is only one of the symptoms of developmental dyslexia. Dyslexic children also show poor motor coordination and postural control. Those problems could be associated with automaticity, i.e., difficulty in performing a task without expending a considerable amount of conscious effort. If this is the case, dyslexic children would show difficulties in using "unperceived" sensory cues to control body sway. Therefore, the aim of this study was to examine postural control performance and the coupling between visual information and body sway in dyslexic children. Ten dyslexic children and 10 non-dyslexic children stood upright inside a moving room that remained stationary or oscillated back and forth at frequencies of 0.2 or 0.5 Hz. Body sway magnitude and the relationship between the room's movement and body sway were examined. The results indicated that dyslexic children oscillated more than non-dyslexic children in both the stationary and oscillating conditions. Visual manipulation induced body sway in all children, but the coupling between visual information and body sway was weaker and more variable in dyslexic children. Based upon these results, we suggest that dyslexic children use visual information for postural control through the same underlying processes as non-dyslexic children; however, dyslexic children show poorer performance and more variability in relating visual information to motor action, even in a task that does not require active cognitive and conscious motor involvement, which may be further evidence of an automaticity problem.
Abstract:
Females of some Thomisidae species are known to use visual and olfactory stimuli to select high-quality hunting sites. However, because studies of foraging behavior in this family have concentrated on a few species, our understanding of the evolution of hunting behavior in crab spiders may be biased. In this study we investigated hunting site selection in a previously unstudied crab spider, Epicadus heterogaster. We performed three experiments to evaluate the hypothesis that subadult females are able to use visual and olfactory stimuli to select hunting sites. In the first experiment, females did not preferentially select flower paper models that matched their body coloration. However, after choosing a model of the same color as their own body, they remained on it for longer periods than on models of different colors. In the second experiment, females did not discriminate between flower paper models, natural flower models, and crumpled paper models. Females also did not discriminate among different olfactory stimuli in the third experiment. It is possible that subadult females of E. heterogaster need to settle on and experience a given hunting site before evaluating its quality. However, it remains to be investigated whether they use UV cues to select a foraging area before experiencing it.
Abstract:
The body is represented in the brain at levels that incorporate multisensory information. This thesis focused on interactions between vision and cutaneous sensations (i.e., touch and pain). Experiment 1 revealed that there are partially dissociable pathways for visual enhancement of touch (VET) depending upon whether one sees one's own body or the body of another person. This indicates that VET, a seemingly low-level effect on spatial tactile acuity, is actually sensitive to body identity. Experiments 2-4 explored the effect of viewing one's own body on pain perception. They demonstrated that viewing the body biases pain intensity judgments irrespective of actual stimulus intensity and, more importantly, reduces the discriminative capacity of the nociceptive pathway encoding noxious stimulus intensity. The latter effect occurs only if the pain-inducing event itself is not visible, suggesting that viewing the body alone and viewing a stimulus event on the body have distinct effects on cutaneous sensations. Experiment 5 replicated an enhancement of visual remapping of touch (VRT) when viewing fearful human faces being touched, and further demonstrated that VRT does not occur for observed touch on non-human faces, even fearful ones. This suggests that the facial expressions of non-human animals may not be simulated within the somatosensory system of the human observer in the same way that the facial expressions of other humans are. Finally, Experiment 6 examined the enfacement illusion, in which synchronous visuo-tactile inputs cause another's face to be assimilated into the mental self-face representation. The strength of enfacement was not affected by the other's facial expression, supporting an asymmetric relationship between the processing of facial identity and facial expressions. Together, these studies indicate that multisensory representations of the body in the brain link low-level perceptual processes with the perception of emotional cues and body/face identity, and interact in complex ways depending upon contextual factors.
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects watched videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present or absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. We discuss whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
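As an aside on the dependent measures: cumulative and mean fixation duration per cell of the design reduce to a simple aggregation over a fixation log. The pandas sketch below is not the authors' pipeline; the table layout and all column names are hypothetical, for illustration only:

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, labelled with the
# design factors from the study (column names are assumptions).
fixations = pd.DataFrame({
    "group":       ["aphasic", "control", "aphasic", "control"],
    "gesture":     ["present", "present", "absent", "absent"],
    "gaze":        ["speaker", "speaker", "listener", "speaker"],
    "roi":         ["face", "hands", "face", "body"],
    "duration_ms": [420, 310, 250, 380],
})

# Cumulative and mean fixation duration per cell of the design.
summary = (fixations
           .groupby(["group", "gesture", "gaze", "roi"])["duration_ms"]
           .agg(cumulative="sum", mean="mean")
           .reset_index())
print(summary)
```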
Abstract:
Many mental disorders disrupt social skills, yet few studies have examined how the brain processes social information. Functional neuroimaging, neuroconnectivity and electrophysiological studies suggest that orbital frontal cortex plays important roles in social cognition, including the analysis of information from faces, which are important cues in social interactions. Studies in humans and non-human primates show that damage to orbital frontal cortex produces social behavior impairments, including abnormal aggression, but these studies have failed to determine whether damage to this area impairs face processing. In addition, it is not known whether damage early in life is more detrimental than damage in adulthood. This study examined whether orbital frontal cortex is necessary for the discrimination of face identity and facial expressions, and for appropriate behavioral responses to aggressive (threatening) facial expressions. Rhesus monkeys (Macaca mulatta) received selective lesions of orbital frontal cortex as newborns or adults. As adults, these animals were compared with sham-operated controls on their ability to discriminate between faces of individual monkeys and between different facial expressions of emotion. A passive visual paired-comparison task with standardized rhesus monkey face stimuli was designed and used to assess discrimination. In addition, looking behavior toward aggressive expressions was assessed and compared with that of normal control animals. The results showed that lesion of orbital frontal cortex (1) may impair discrimination between faces of individual monkeys, (2) does not impair facial expression discrimination, and (3) changes the amount of time spent looking at aggressive (threatening) facial expressions depending on the context. The effects of early and late lesions did not differ. Thus, orbital frontal cortex appears to be part of the neural circuitry for recognizing individuals and for modulating the response to aggression in faces, and the plasticity of the immature brain does not allow for recovery of these functions when the damage occurs early in life. This study opens new avenues for the assessment of rhesus monkey face processing and the neural basis of social cognition, and allows a better understanding of the nature of the neuropathology in patients with mental disorders that disrupt social behavior, such as autism.
Abstract:
Environmental cues can affect food decisions. There is growing evidence that environmental cues influence how much one consumes. This article demonstrates that environmental cues can similarly affect the healthiness of consumers' food choices. Two field studies examined this effect with consumers of vending machine foods who were exposed to different posters. In field study 1, consumers who had a health-evoking nature poster in their visual sight were more likely to opt for healthy snacks than those exposed to a pleasure-evoking fun fair poster or no poster. Consumers were also more likely to buy healthy snacks when primed by an activity poster than when exposed to the fun fair poster. In field study 2, this pattern recurred with a poster of skinny Giacometti sculptures. Overall, the results extend the mainly laboratory-based evidence by demonstrating the health-relevant impact of environmental cues on food decisions in the field. The results are discussed in light of the priming literature, emphasizing the relevance of preexisting associations, mental concepts, and goals.
Abstract:
The primate visual motion system performs numerous functions essential for survival in a dynamic visual world. Prominent among these functions is the ability to recover and represent the trajectories of objects in a form that facilitates behavioral responses to those movements. The first step toward this goal, detecting the displacement of retinal image features, has been studied for many years in both psychophysical and neurobiological experiments. Evidence indicates that this step is computationally straightforward and occurs at the earliest cortical stage. The second step involves the selective integration of retinal motion signals according to the object of origin. Realizing this step is computationally demanding, as the solution is formally underconstrained: it must rely, by definition, on retinal cues that are indicative of the spatial relationships within and between objects in the visual scene. Psychophysical experiments have documented this dependence and suggested mechanisms by which it may be achieved. Neurophysiological experiments have provided evidence for a neural substrate that may underlie this selective motion signal integration. Together they paint a coherent portrait of the means by which retinal image motion gives rise to our perceptual experience of moving objects.
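The "computationally straightforward" first step is commonly modeled with correlation-type elementary motion detectors. The Python sketch below implements a textbook opponent Reichardt detector; it is a standard model of local motion detection, not a construct taken from this particular paper:

```python
import numpy as np

def reichardt_response(signal, dt=1.0, tau=3.0, offset=1):
    """Opponent correlation-type (Reichardt) motion detector.

    signal: (T, X) array of luminance over time and 1-D space
    tau:    time constant of the low-pass filter acting as the delay
    offset: spatial separation of the two detector inputs (pixels)
    Returns a (T, X - offset) array: positive for rightward motion,
    negative for leftward motion.
    """
    T, X = signal.shape
    alpha = dt / (tau + dt)
    # A first-order low-pass filter serves as the delay line.
    delayed = np.zeros_like(signal, dtype=float)
    for t in range(1, T):
        delayed[t] = delayed[t - 1] + alpha * (signal[t] - delayed[t - 1])
    left, right = signal[:, :-offset], signal[:, offset:]
    d_left, d_right = delayed[:, :-offset], delayed[:, offset:]
    # Opponent stage: each input is correlated with its delayed neighbour.
    return d_left * right - d_right * left

# A rightward-drifting grating yields a positive mean response.
t, x = np.meshgrid(np.arange(200), np.arange(64), indexing="ij")
grating = np.sin(2 * np.pi * (x - 0.5 * t) / 16)
print(reichardt_response(grating).mean())  # > 0
```

The formally underconstrained second step, segmenting and pooling these local signals by object of origin, has no comparably compact standard algorithm, which is precisely the point the abstract makes.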
Abstract:
Various neuroimaging investigations have revealed that perception of emotional pictures is associated with greater visual cortex activity than perception of their neutral counterparts. It has further been proposed that threat-related information is processed rapidly, suggesting that the modulation of visual cortex activity should occur at an early stage. Additional studies have demonstrated that oscillatory activity in the gamma band range (40-100 Hz) is associated with threat processing. Magnetoencephalography (MEG) was used to investigate such activity during perception of task-irrelevant, threat-related versus neutral facial expressions. Our results demonstrated a bilateral reduction in gamma band activity for expressions of threat, specifically anger, compared with neutral faces in extrastriate visual cortex (BA 18) within 50-250 ms of stimulus onset. These results suggest that gamma activity in visual cortex may play a role in the affective modulation of visual processing, particularly in the perception of threat cues.
Abstract:
In an endeavour to provide further insight into the maturation of the cortical visual system in human infants, chromatic transient pattern reversal visual evoked potentials (VEPs) to red/green stimuli were studied in a group of normal full-term infants between the ages of 1 and 14 weeks post term, in both cross-sectional and longitudinal studies. To produce stimuli in which luminance cues had been eliminated, with the aim of eliciting a chromatic response, preliminary studies of isoluminance determination in adults and infants were undertaken using behavioural and electrophysiological techniques. The results showed close similarity between the isoluminant ratios for adults and infants, and all values were close to photometric isoluminance. Pattern reversal VEPs were recorded to stimuli covering a range of red/green luminance ratios and to an achromatic checkerboard. No transient VEP could be elicited with an isoluminant chromatic pattern reversal stimulus from any infant less than 7 weeks post term, whereas all infants more than 7 weeks post term showed clear chromatic VEPs. The chromatic response first appeared at that age as a major positive component (P1) of long latency, delayed and reduced in comparison with the achromatic response. As the infants grew older, the latency of the P1 component decreased, with the appearance of N1 and N2 by the 10th week post term. This finding was consistent across all infants assessed. In a behavioural study, no infant less than 7 weeks post term demonstrated clear discrimination of the chromatic stimulus, while infants older than 7 weeks could do so. These findings are reviewed with respect to current neural models of visual development.
Abstract:
To navigate effectively in three-dimensional space, flying insects must approximate the distances to nearby objects. Humans are able to use an array of cues to guide depth perception in the visual world. However, some of these cues are not available to insects, which are constrained by their rigid eyes and relatively small body size. Flying fruit flies can use motion parallax to gauge the distance of nearby objects, but this cue becomes less effective as objects become more remote. Humans are able to infer depth across far distances by comparing the angular distance of an object to the horizon. This study tested whether flying fruit flies, like humans, use the relative position of the horizon as a depth cue. Fruit flies in tethered flight were stimulated with a virtual environment that displayed vertical bars at varying elevations relative to a horizon, and their tracking responses were recorded. The study showed that the flies' tracking responses were strongly increased by reducing the apparent elevation of the bar against the horizon, indicating that fruit flies may be able to assess the distance of far-off objects in the natural world by comparing them against a visual horizon.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thirteen international netballers viewed static images of scenarios taken from netball open play. Two 'teammates', each marked by one opponent, could be seen in each image; each teammate-opponent pair was located on opposite sides of the vertical meridian, such that a binary response ('left' or 'right') was required from the participant to select a teammate to whom they would pass the ball. For each trial, a spoken word ("left"/"right") was presented monaurally at the onset of the visual image. Spatially invalid auditory cues (i.e., in the ear contralateral to the correct passing option) reduced performance accuracy relative to valid ones. Semantically invalid cues (e.g., a call of "left" when the target was located on the right) increased response times relative to valid ones. However, there were no accompanying changes in visual attention to the teammates and their markers. The effects of auditory cues on covert attentional shifts and decision-making are discussed.