12 results for Visual Cues
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Identifying a human body stimulus involves mentally rotating an embodied spatial representation of one's body (motoric embodiment) and projecting it onto the stimulus (spatial embodiment). Interactions between these two processes (spatial and motoric embodiment) may thus reveal cues about the underlying reference frames. The allocentric visual reference frame, and hence the perceived orientation of the body relative to gravity, was modulated using the York Tumbling Room, a fully furnished cubic room with strong directional cues that can be rotated around a participant's roll axis. Sixteen participants were seated upright (relative to gravity) in the Tumbling Room and made judgments about body and hand stimuli that were presented in the frontal plane at orientations of 0°, 90°, 180° (upside down), or 270° relative to them. Body stimuli have an intrinsic visual polarity relative to the environment, whereas hands do not. Simultaneously, the room was oriented 0°, 90°, 180° (upside down), or 270° relative to gravity, resulting in sixteen combinations of orientations. Body stimuli were more accurately identified when room and body stimuli were aligned. However, such congruency did not facilitate identifying hand stimuli. We conclude that static allocentric visual cues can affect embodiment and hence performance in an egocentric mental transformation task. Reaction times to identify either hands or bodies showed no dependence on room orientation.
Abstract:
Background: Research on the evolution of reproductive isolation in African cichlid fishes has largely focussed on the role of male colours and female mate choice. Here, we tested predictions from the hypothesis that allopatric divergence in male colour is associated with corresponding divergence in preference. Methods: We studied four populations of the Lake Malawi Pseudotropheus zebra complex. We predicted that more distantly related populations that independently evolved similar colours would interbreed freely, while more closely related populations with different colours would mate assortatively. We used microsatellite genotypes or mesh false-floors to assign paternity. Fisher's exact tests as well as Binomial and Wilcoxon tests were used to detect whether mating departed from random expectations. Results: Surprisingly, laboratory mate choice experiments revealed significant assortative mating not only between population pairs with differently coloured males, but also between population pairs with similarly coloured males. This suggested that assortative mating could be based on nonvisual cues, so we further examined the sensory basis of assortative mating between two populations with different male colour. Conducting trials under monochromatic (orange) light, intended to mask the distinctive male dorsal fin hues (blue vs. orange) of these populations, did not significantly affect the assortative mating by female P. emmiltos observed under control conditions. By contrast, assortative mating broke down when direct contact between female and male was prevented. Conclusion: We suggest that non-visual cues, such as olfactory signals, may play an important role in mate choice and behavioural isolation in these and perhaps other African cichlid fish. Future speciation models aimed at explaining African cichlid radiations may therefore consider incorporating such mating cues in mate choice scenarios.
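A minimal sketch of the kind of test named in the Methods above (Fisher's exact test on mating counts) is given below. The 2×2 table is purely hypothetical, not data from the study, and the scipy call is simply one common way to run such a test.

```python
# Minimal sketch: Fisher's exact test of whether matings depart from random
# expectations. The counts are hypothetical illustration only, not study data.
from scipy.stats import fisher_exact

# Rows: females of population A, females of population B
# Columns: matings with own-population males, matings with other-population males
mating_counts = [[18, 4],
                 [5, 15]]

odds_ratio, p_value = fisher_exact(mating_counts, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")  # small p suggests assortative mating
```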
Abstract:
OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or the full-screen mode. There was a significant median gain of +8.5 %pts (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception gain +11.8 %pts, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication for hearing-impaired individuals.
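The reported median gain in speech perception corresponds to a paired comparison of scores with and without visual cues. The sketch below, using invented paired scores rather than the study's data, shows how such a median gain and an accompanying Wilcoxon signed-rank test could be computed.

```python
# Minimal sketch: paired audio-only vs. audio-visual speech perception scores
# (invented numbers, not the study's data), median gain in percentage points,
# and a Wilcoxon signed-rank test on the paired differences.
import numpy as np
from scipy.stats import wilcoxon

audio_only   = np.array([40, 55, 62, 30, 75, 48, 58, 66, 35, 70])  # % correct
audio_visual = np.array([52, 60, 70, 42, 80, 55, 65, 72, 47, 78])  # % correct

gain = audio_visual - audio_only
stat, p = wilcoxon(audio_only, audio_visual)
print(f"median gain = {np.median(gain):.1f} %pts, Wilcoxon p = {p:.3f}")
```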
Abstract:
The cochlear implant (CI) is one of the most successful neural prostheses developed to date. It offers artificial hearing to individuals with profound sensorineural hearing loss and with insufficient benefit from conventional hearing aids. The first implants available some 30 years ago provided a limited sensation of sound. The benefit for users of these early systems was mostly a facilitation of lip-reading based communication rather than an understanding of speech. Considerable progress has been made since then. Modern, multichannel implant systems feature complex speech processing strategies, high stimulation rates and multiple sites of stimulation in the cochlea. Equipped with such a state-of-the-art system, the majority of recipients today can communicate orally without visual cues and can even use the telephone. The impact of CIs on deaf individuals and on the deaf community has thus been exceptional. To date, more than 300,000 patients worldwide have received CIs. In Switzerland, the first implantation was performed in 1977 and, as of 2012, over 2,000 systems have been implanted with a current rate of around 150 CIs per year. The primary purpose of this article is to provide a contemporary overview of cochlear implantation, emphasising the situation in Switzerland.
Abstract:
Objectives: In fast ball sports like beach volleyball, decision-making skills are a determining factor for excellent performance. The current investigation aimed to identify factors that influence the decision-making process in top-level beach volleyball defense in order to find relevant aspects for further research. For this reason, focused interviews with top players in international beach volleyball were conducted and analyzed with respect to decision-making characteristics. Design: Nineteen world-tour beach volleyball defense players, including seven Olympic or world champions, were interviewed, focusing on decision-making factors, gaze behavior, and interactions between the two. Methods: Verbal data were analyzed by inductive content analysis according to Mayring (2008). This approach allows categories to emerge from the interview material itself instead of forcing data into preset classifications and theoretical concepts. Results: The data analysis showed that, for top-level beach volleyball defense, decision making depends on opponent specifics, external context, situational context, opponent's movements, and intuition. Information on gaze patterns and visual cues revealed general tendencies indicating optimal gaze strategies that support excellent decision making. Furthermore, the analysis highlighted interactions between gaze behavior, visual information, and domain-specific knowledge. Conclusions: The present findings provide information on visual perception, domain-specific knowledge, and interactions between the two that are relevant for decision making in top-level beach volleyball defense. The results can be used to inform sports practice and to further untangle relevant mechanisms underlying decision making in complex game situations.
Abstract:
Previous research has shown that distance estimates made from memory are often asymmetric. Specifically, when A is a prominent location (a landmark) and B is not, people tend to recall a longer distance from A to B than from B to A. Results of two experiments showed that asymmetric judgments of distance are not restricted to judgments made from memory but also occur for judgments made when all relevant visual cues are still present. Furthermore, results indicated that situational salience is sufficient to produce asymmetric judgments and that distinctiveness (such as in the case of architectural landmarks) is not necessary.
Abstract:
For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called “color-cueing method”, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behavior. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance, with the largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behavior, but they also question its benefit for enhancing decision-making. However, before completely dismissing the method’s value, optimisations should be examined regarding, for instance, cueing-pattern characteristics and gaze-related feedback.
Abstract:
Introduction: Although it seems plausible that sports performance relies on high-acuity foveal vision, it has been shown empirically that myopic blur (up to +2 diopters) does not harm performance in sport tasks that require foveal information pick-up, such as golf putting (Bulson, Ciuffreda, & Hung, 2008). How myopic blur affects peripheral performance is as yet unknown. With reduced foveal vision, less attention might be needed for processing visual cues foveally and peripheral cues might be processed better, leading to better performance; this was tested in the current experiment. Methods: 18 sport science students with self-reported myopia volunteered as participants, all of them regularly wearing contact lenses. Exclusion criteria comprised visual correction other than myopic, correction of astigmatism, and use of contact lenses from outside the Swiss delivery area. For each of the participants, three pairs of additional contact lenses (besides their regular lenses, used in the “plano” condition) were manufactured with an individual overcorrection to a retinal defocus of +1 to +3 diopters (referred to as the “+1.00 D”, “+2.00 D”, and “+3.00 D” conditions, respectively). Gaze data were acquired while participants had to perform a multiple object tracking (MOT) task that required tracking 4 out of 10 moving stimuli. In addition, in 66.7 % of all trials, one of the 4 targets suddenly stopped during the motion phase for a period of 0.5 s. Stimuli moved in front of a picture of a sports hall to allow for foveal processing. Due to the directional hypotheses, the level of significance for one-tailed tests on differences was set at α = .05, and a posteriori effect sizes were computed as partial eta squared (ηp²). Results: Due to problems with the gaze-data collection, 3 participants had to be excluded from further analyses. The expectation of a centroid strategy was confirmed because gaze was closer to the centroid than to the targets (all p < .01). In comparison to the plano baseline, participants more often recalled all 4 targets under defocus conditions, F(1,14) = 26.13, p < .01, ηp² = .65. The three defocus conditions differed significantly, F(2,28) = 2.56, p = .05, ηp² = .16, with higher accuracy as a function of increasing defocus and significant contrasts between conditions +1.00 D and +2.00 D (p = .03) and between +1.00 D and +3.00 D (p = .03). For stop trials, significant differences were found neither between the plano baseline and the defocus conditions, F(1,14) = .19, p = .67, ηp² = .01, nor between the three defocus conditions, F(2,28) = 1.09, p = .18, ηp² = .07. Participants reacted faster in “4 correct+button” trials under defocus than under plano-baseline conditions, F(1,14) = 10.77, p < .01, ηp² = .44. The defocus conditions differed significantly, F(2,28) = 6.16, p < .01, ηp² = .31, with shorter response times as a function of increasing defocus and significant contrasts between +1.00 D and +2.00 D (p = .01) and between +1.00 D and +3.00 D (p < .01). Discussion: The results show that gaze behaviour in MOT is not affected to a relevant degree by a visual overcorrection of up to +3 diopters. Hence, it can be assumed that peripheral event detection was indeed investigated in the present study. This overcorrection, however, does not harm the capability to peripherally track objects. Moreover, if an event has to be detected peripherally, neither response accuracy nor response time is negatively affected. These findings could be of considerable relevance for all sport situations in which peripheral vision is required, which now calls for applied studies on this topic. References: Bulson, R. C., Ciuffreda, K. J., & Hung, G. K. (2008). The effect of retinal defocus on golf putting. Ophthalmic and Physiological Optics, 28, 334-344.
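The effect sizes above are partial eta squared values, ηp² = SS_effect / (SS_effect + SS_error). A minimal sketch of a one-way repeated-measures ANOVA with this effect size, on invented accuracy data rather than the study's, is given below.

```python
# Minimal sketch: one-way repeated-measures ANOVA with partial eta squared,
# the effect-size measure reported above. The data are invented for
# illustration and do not reproduce the study's results.
import numpy as np

# rows = participants, columns = defocus conditions (+1.00 D, +2.00 D, +3.00 D)
scores = np.array([
    [0.70, 0.78, 0.82],
    [0.65, 0.72, 0.80],
    [0.72, 0.75, 0.79],
    [0.60, 0.70, 0.76],
    [0.68, 0.74, 0.81],
])

n_subj, n_cond = scores.shape
grand_mean = scores.mean()
cond_means = scores.mean(axis=0)
subj_means = scores.mean(axis=1)

ss_effect  = n_subj * np.sum((cond_means - grand_mean) ** 2)  # condition effect
ss_subject = n_cond * np.sum((subj_means - grand_mean) ** 2)  # between-subject variance
ss_total   = np.sum((scores - grand_mean) ** 2)
ss_error   = ss_total - ss_effect - ss_subject                # condition x subject residual

df_effect = n_cond - 1
df_error  = df_effect * (n_subj - 1)
F = (ss_effect / df_effect) / (ss_error / df_error)
eta_p2 = ss_effect / (ss_effect + ss_error)

print(f"F({df_effect},{df_error}) = {F:.2f}, partial eta squared = {eta_p2:.2f}")
```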
Abstract:
Laying hens in loose-housing systems daily select a nest in which to lay their eggs from among many identical-looking nests, and they often prefer corner nests. We investigated whether heterogeneity in nest curtain appearance (via colours and symbols) would influence nest selection and result in an even distribution of eggs among nests. We studied pre-laying behaviour in groups of 30 LSL hens across two consecutive trials with eight groups per trial. Half of the groups had access to six identical rollaway group-nests, while the others had access to six nests of the same type differing in outer appearance. Three colours (red, green, yellow) and three black symbols (cross, circle, rectangle) were used to create three different nest curtain designs per pen. Nest position and the side of entrance to the pens were changed at 28 and 30 weeks of age, respectively, whereby the order of changes was counterbalanced across trials. Nest positions were numbered 1–6, with nest position 1 representing the nest closest to the pen entrance. Eggs were counted per nest daily from 18 to 33 weeks of age. Nest visits were recorded individually with an RFID system for the first 5 h of light throughout weeks 24–33. Hens with access to nests differing in curtain appearance entered fewer nests daily than hens with identical nests throughout the study, but both groups entered more nests with increasing age. We found no other evidence that curtain appearance affected nest choice, and hens were inconsistent in their daily nest selection. A high proportion of eggs were laid in corner nests, especially during the first three weeks of lay. The number of visits per egg depended on nest position and age: it increased with age and was higher after the nest position change than before in nest position 1, whereas it stayed stable over time in nest position 6. At 24 weeks of age, gregarious nest visits (hens visiting an occupied nest when there was at least one unoccupied nest) and solitary nest visits (hens visiting an unoccupied nest when there was at least one occupied nest) accounted for a similar number of nest visits; however, after the door switch, gregarious nest visits made up more than half of all nest visits, while the number of solitary nest visits had decreased. The visual cues were too subtle or inadequate for the hens to develop individual preferences, while nest position, entrance side, age, and nest occupancy affected the quantity and type of nest visits.
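The gregarious/solitary distinction above amounts to a simple classification of each RFID-registered nest entry by the occupancy of the nests at that moment. The sketch below illustrates that logic with an invented record format; it is not the study's processing pipeline.

```python
# Minimal sketch: classify a nest visit as "gregarious" (entering an occupied
# nest while at least one other nest is unoccupied) or "solitary" (entering an
# unoccupied nest while at least one other nest is occupied), following the
# definitions in the abstract. The record format is invented for illustration.
from dataclasses import dataclass

@dataclass
class Visit:
    nest_id: int
    occupancy: dict  # nest_id -> number of hens inside at the moment of entry

def classify(visit: Visit) -> str:
    others = {n: occ for n, occ in visit.occupancy.items() if n != visit.nest_id}
    entered_occupied = visit.occupancy[visit.nest_id] > 0
    if entered_occupied and any(occ == 0 for occ in others.values()):
        return "gregarious"
    if not entered_occupied and any(occ > 0 for occ in others.values()):
        return "solitary"
    return "other"

# Example: a hen enters nest 3 (already holding 2 hens) while nests 2 and 5 are empty.
print(classify(Visit(nest_id=3, occupancy={1: 1, 2: 0, 3: 2, 4: 1, 5: 0, 6: 3})))  # gregarious
```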
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
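Cumulative and mean fixation duration per region of interest are straightforward aggregations over the eye-tracker's fixation records. A minimal sketch under an assumed record format (one entry per fixation with its ROI label and duration) is shown below; it is illustrative only, not the study's analysis code.

```python
# Minimal sketch: aggregate fixation durations per region of interest (ROI)
# into cumulative and mean fixation duration, the two dependent measures
# described above. The fixation records are invented for illustration.
from collections import defaultdict

# (ROI label, fixation duration in ms) for one hypothetical video
fixations = [("face", 420), ("hands", 180), ("face", 350),
             ("body", 120), ("face", 510), ("hands", 220)]

durations = defaultdict(list)
for roi, dur in fixations:
    durations[roi].append(dur)

for roi, durs in durations.items():
    cumulative = sum(durs)
    mean = cumulative / len(durs)
    print(f"{roi}: cumulative = {cumulative} ms, mean = {mean:.0f} ms")
```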
Abstract:
Background: Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Methods: Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Results: Both aphasic patients and healthy controls mainly fixated the speaker’s face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker’s face compared to healthy controls. Conclusion: Co-speech gestures guide the observer’s attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker’s face less. Keywords: Gestures, visual exploration, dialogue, aphasia, apraxia, eye movements
Abstract:
Environmental cues can affect food decisions. There is growing evidence that environmental cues influence how much one consumes. This article demonstrates that environmental cues can similarly impact the healthiness of consumers’ food choices. Two field studies examined this effect with consumers of vending machine foods who were exposed to different posters. In field study 1, consumers who had a health-evoking nature poster in view were more likely to opt for healthy snacks than consumers exposed to a pleasure-evoking fun fair poster or no poster. Consumers were also more likely to buy healthy snacks when primed by an activity poster than when exposed to the fun fair poster. In field study 2, this consumer pattern recurred with a poster of skinny Giacometti sculptures. Overall, the results extend the mainly laboratory-based evidence by demonstrating the health-relevant impact of environmental cues on food decisions in the field. Results are discussed in light of the priming literature, emphasizing the relevance of preexisting associations, mental concepts, and goals.