20 results for Auditory-visual Interaction
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with visually or audio-visually presented animated and inanimate objects, presented in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
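As a concrete illustration of the kind of group comparison the abstract reports, the following is a minimal NumPy/SciPy sketch of extracting a per-subject occipital N1 amplitude and comparing groups. The channel indices, the 150-200 ms window, the sampling rate and the synthetic data are all illustrative assumptions, not the study's actual pipeline.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sfreq = 250.0                                  # assumed sampling rate (Hz)
    times = np.arange(-0.1, 0.5, 1.0 / sfreq)      # epoch time axis (s)
    occ_idx = [28, 29, 30]                         # assumed indices of O1/Oz/O2

    def n1_amplitude(epochs, times, occ_idx, window=(0.15, 0.20)):
        """epochs: (n_trials, n_channels, n_samples); N1 is a negative deflection."""
        erp = epochs.mean(axis=0)                            # trial-averaged ERP
        mask = (times >= window[0]) & (times <= window[1])
        return erp[occ_idx][:, mask].min()                   # most negative value in window

    def fake_subject():
        # Stand-in for one subject's 32-channel epoched EEG (60 trials)
        return rng.normal(0.0, 1e-6, size=(60, 32, times.size))

    synesthetes = [n1_amplitude(fake_subject(), times, occ_idx) for _ in range(14)]
    controls = [n1_amplitude(fake_subject(), times, occ_idx) for _ in range(14)]
    print(stats.ttest_ind(synesthetes, controls))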
Abstract:
In immediate recall tasks, visual recency is substantially enhanced when output interference is low (Cowan, Saults, Elliott, & Moreno, 2002; Craik, 1969), whereas auditory recency remains high even under conditions of high output interference. This auditory advantage has been interpreted in terms of auditory resistance to output interference (e.g., Neath & Surprenant, 2003). In this study the auditory-visual difference at low output interference re-emerged when ceiling effects were accounted for, but only with spoken output. With written responding, the auditory advantage remained significantly larger with high than with low output interference. These new data suggest that both superior auditory encoding and modality-specific output interference contribute to the classic auditory-visual modality effect.
Abstract:
What is already known on the subject: Multi-sensory treatment approaches have been shown to impact outcome measures positively, such as accuracy of speech movement patterns and speech intelligibility in adults with motor speech disorders, as well as in children with apraxia of speech, autism and cerebral palsy. However, there has been no empirical study using multi-sensory treatment for children with speech sound disorders (SSDs) who demonstrate motor control issues in the jaw and orofacial structures (e.g. jaw sliding, jaw overextension, inadequate lip rounding/retraction and decreased integration of speech movements). What this paper adds: Findings from this study indicate that, for speech production disorders where both the planning and production of spatiotemporal parameters of movement sequences for speech are disrupted, multi-sensory treatment programmes that integrate auditory, visual and tactile–kinesthetic information improve auditory and visual accuracy of speech production. Both the training words (practised in treatment) and the test words (not practised in treatment) showed positive change in most participants, indicating generalization of target features to untrained words. It is inferred that treatment that focuses on integrating multi-sensory information and normalizing parameters of speech movements is an effective method for treating children with SSDs who demonstrate speech motor control issues.
Abstract:
This paper describes experiments relating to the perception of the roughness of simulated surfaces via the haptic and visual senses. Subjects used a magnitude estimation technique to judge the roughness of “virtual gratings” presented via a PHANToM haptic interface device, and a standard visual display unit. It was shown that under haptic perception, subjects tended to perceive roughness as decreasing with increased grating period, though this relationship was not always statistically significant. Under visual exploration, the exact relationship between spatial period and perceived roughness was less well defined, though linear regressions provided a reliable approximation to individual subjects’ estimates.
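A minimal sketch of the analysis the abstract describes: fitting a per-subject linear regression of normalised magnitude estimates against grating period. The periods, estimates and normalisation step are illustrative assumptions, not the study's data or exact procedure.

    import numpy as np
    from scipy import stats

    # Hypothetical grating periods (mm) and one subject's roughness estimates
    periods = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    estimates = np.array([8.2, 7.0, 6.1, 5.4, 4.9, 4.3])  # illustrative values

    # Normalise to the subject's own mean, a common magnitude-estimation step
    normalised = estimates / estimates.mean()

    fit = stats.linregress(periods, normalised)
    print(f"slope={fit.slope:.3f}, r^2={fit.rvalue**2:.3f}, p={fit.pvalue:.3g}")
    # A negative slope reproduces the reported haptic trend: perceived
    # roughness decreasing with increasing grating period.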
Abstract:
Commercial interventions seeking to promote fruit and vegetable consumption by encouraging preschool- and school-aged children to engage with foods with ‘all their senses’ are increasing in number. We review the efficacy of such sensory interaction programmes and consider the components of these that are likely to encourage food acceptance. Repeated exposure to a food's flavour has robust empirical support in terms of its potential to increase food intake. However, children are naturally reluctant to taste new or disliked foods, and parents often struggle to provide sufficient taste opportunities for these foods to be adopted into the child's diet. We therefore explore whether prior exposure to a new food's non-taste sensory properties, such as its smell, sound, appearance or texture, might facilitate the food's introduction into the child's diet, by providing the child with an opportunity to become partially familiar with the food without invoking the distress associated with tasting it. We review the literature pertaining to the benefits associated with exposure to foods through each of the five sensory modalities in turn. We conclude by calling for further research into the potential for familiarisation with the visual, olfactory, somaesthetic and auditory properties of foods to enhance children's willingness to consume a variety of fruits and vegetables.
Abstract:
This article explores the way users of an online gay chat room negotiate the exchange of photographs and the conduct of video conferencing sessions and how this negotiation changes the way participants manage their interactions and claim and impute social identities. Different modes of communication provide users with different resources for the control of information, affecting not just what users are able to reveal, but also what they are able to conceal. Thus, the shift from a purely textual mode for interacting to one involving visual images fundamentally changes the kinds of identities and relationships available to users. At the same time, the strategies users employ to negotiate these shifts of mode can alter the resources available in different modes. The kinds of social actions made possible through different modes, it is argued, are not just a matter of the modes themselves but also of how modes are introduced into the ongoing flow of interaction.
Abstract:
Seventeen-month-old infants were presented with pairs of images, in silence or with the non-directive auditory stimulus 'look!'. The images had been chosen so that one image depicted an item whose name was known to the infant, and the other depicted an item whose name was not known to the infant. Infants looked longer at images for which they had names than at images for which they did not have names, despite the absence of any referential input. The experiment controlled for the familiarity of the objects depicted: in each trial, image pairs presented to infants had previously been judged by caregivers to be of roughly equal familiarity. From a theoretical perspective, the results indicate that objects with names are of intrinsic interest to the infant. The possible causal direction for this linkage is discussed and it is concluded that the results are consistent with Whorfian linguistic determinism, although other construals are possible. From a methodological perspective, the results have implications for the use of preferential looking as an index of early word comprehension.
Abstract:
In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research shows the presence of conditioning with affective odours having no logical connection with the pictures.
Abstract:
Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.
Abstract:
Recent interest in material objects - the things of everyday interaction - has led to articulations of their role in the literature on organizational knowledge and learning. What is missing is a sense of how the use of these 'things' is patterned across both industrial settings and time. This research addresses this gap with a particular emphasis on visual materials. Practices are analysed in two contrasting design settings: a capital goods manufacturer and an architectural firm. Materials are observed to be treated both as frozen, and hence unavailable for change, and as fluid, open and dynamic. In each setting, temporal patterns of unfreezing and refreezing are associated with the different types of materials used. The research suggests that these differing patterns or rhythms of visual practice are important in the evolution of knowledge and in structuring social relations for delivery. Hence, to improve their performance, practitioners should not only consider the types of media they use, but also reflect on the pace and style of their interactions.
Abstract:
Purpose. Some children with visual stress and/or headaches have fewer symptoms when wearing colored lenses. Although subjective reports of improved perception exist, few objective correlates of these effects have been established. Methods. In a pilot study, 10 children who wore Intuitive Colorimeter lenses, and claimed benefit, and two asymptomatic children were tested. Steady-state potentials were measured in response to low contrast patterns modulating at a frequency of 12 Hz. Four viewing conditions were compared: 1) no lens; 2) Colorimeter lens; 3) lens of complementary color; and 4) spectrally neutral lens with similar photopic transmission. Results. The asymptomatic children showed little or no difference between the lens and no lens conditions. When all the symptomatic children were tested together, a similar result was found. However, when the symptomatic children were divided into two groups depending on their symptoms, an interaction emerged. Children with visual stress but no headaches showed the largest amplitude visual evoked potential response in the no lens condition, whereas those children whose symptoms included severe headaches or migraine showed the largest amplitude visual evoked potential response when wearing their prescribed lens. Conclusions. The results suggest that it is possible to measure objective correlates of the beneficial subjective perceptual effects of colored lenses, at least in some children who have a history of migraine or severe headaches.
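A small sketch of how a steady-state response at the 12 Hz stimulation frequency might be quantified from a recorded signal, for comparison across the four lens conditions. The sampling rate, windowing and synthetic data are assumptions for illustration, not the study's method.

    import numpy as np

    def ssvep_amplitude(signal, sfreq, stim_freq=12.0):
        """Amplitude of the steady-state response at the stimulation frequency."""
        windowed = signal * np.hanning(signal.size)      # reduce spectral leakage
        spectrum = np.fft.rfft(windowed)
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / sfreq)
        bin_idx = np.argmin(np.abs(freqs - stim_freq))   # nearest FFT bin to 12 Hz
        return 2.0 * np.abs(spectrum[bin_idx]) / signal.size

    # Stand-in recording: 10 s of noise plus a small 12 Hz component
    sfreq = 250.0
    t = np.arange(0, 10, 1.0 / sfreq)
    eeg = np.random.default_rng(1).normal(0, 1.0, t.size) + 0.5 * np.sin(2 * np.pi * 12 * t)
    print(ssvep_amplitude(eeg, sfreq))  # compare this value across lens conditions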
Abstract:
The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in the extrapersonal space (visual encoding only, V), or at the same distance, but stuck on the subject's right index fingertip (visual and proprioceptive encoding, V-P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V-P passive) and actively (V-P active). Results showed that the visual stimulus was always perceived to move, irrespective of its out- or on-the-body location. Moreover, this apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences were found between V-P active and V-P passive conditions in the speed of apparent motion. The persistence of the visual illusion during active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.
Abstract:
Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, and they will constantly dart to-and-fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals and can indicate emotional responses, prior to the viewer becoming aware of them. In this paper we discuss a method of tracking a user's eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the virtual environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.
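To make the object-identification step concrete, here is a minimal sketch of one common approach: casting the gaze ray into the scene and returning the nearest object whose bounding sphere it intersects. The function, scene contents and bounding-sphere representation are hypothetical, not the paper's actual tool.

    import numpy as np

    def gazed_object(origin, direction, objects):
        """Return the nearest object whose bounding sphere the gaze ray hits.
        objects: list of (name, centre (3,), radius) tuples."""
        d = direction / np.linalg.norm(direction)
        best_name, best_t = None, np.inf
        for name, centre, radius in objects:
            oc = centre - origin
            t = oc @ d                        # closest approach along the ray
            if t < 0:
                continue                      # object lies behind the viewer
            miss_sq = oc @ oc - t * t         # squared ray-to-centre distance
            if miss_sq <= radius * radius and t < best_t:
                best_name, best_t = name, t
        return best_name

    # Hypothetical scene: gaze straight ahead from the origin
    scene = [("door", np.array([0.0, 0.0, 5.0]), 0.8),
             ("lamp", np.array([2.0, 1.0, 4.0]), 0.3)]
    print(gazed_object(np.zeros(3), np.array([0.0, 0.0, 1.0]), scene))  # -> door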
Abstract:
This investigation moves beyond the traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO’s ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition, words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO’s performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, which are exaggerated when derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.
Abstract:
I propose a new argument showing that conscious vision sometimes depends constitutively on conscious attention. I criticise traditional arguments for this constitutive connection, on the basis that they fail adequately to dissociate evidence about visual consciousness from evidence about attention. On the same basis, I criticise Ned Block's recent counterargument that conscious vision is independent of one sort of attention ('cognitive access'). Block appears to achieve the dissociation only because he underestimates the indeterminacy of visual consciousness. I then appeal to empirical work on the interaction between visual indeterminacy and attention to argue for the constitutive connection.