105 results for Visual and auditory processing
in CentAUR: Central Archive, University of Reading - UK
Abstract:
ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords, and a somewhat later significant (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to the phonotactic probability of the stimuli, and that lexically based redintegration occurred for words but not for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical-level redintegration.
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation in one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed altered multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task combined with visually or audio-visually presented animate and inanimate objects in audio-visually congruent and incongruent conditions. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used to evaluate the proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
Abstract:
In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research demonstrates conditioning with affective odours that have no logical connection with the pictures.
Abstract:
During the past decade, brain–computer interfaces (BCIs) have developed rapidly, in both the technological and application domains. However, most of these interfaces rely on the visual modality. Only some research groups have been studying non-visual BCIs, primarily based on auditory and, sometimes, on somatosensory signals. These non-visual BCI approaches are especially useful for severely disabled patients with poor vision. From a broader perspective, multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback. This chapter describes current systems used within auditory and somatosensory BCI research. Four categories of noninvasive BCI paradigms are employed: (1) P300 evoked potentials, (2) steady-state evoked potentials, (3) slow cortical potentials, and (4) mental tasks. Comparing visual and non-visual BCIs, we propose and discuss different possible multisensory combinations, as well as their pros and cons. We conclude by discussing potential future research directions for multisensory BCIs and related research questions.
Abstract:
While there has been a fair amount of research investigating children’s syntactic processing during spoken language comprehension, and a wealth of research examining adults’ syntactic processing during reading, as yet very little research has focused on syntactic processing during text reading in children. In two experiments, children and adults read sentences containing a temporary syntactic ambiguity while their eye movements were monitored. In Experiment 1, participants read sentences such as, ‘The boy poked the elephant with the long stick/trunk from outside the cage’ in which the attachment of a prepositional phrase was manipulated. In Experiment 2, participants read sentences such as, ‘I think I’ll wear the new skirt I bought tomorrow/yesterday. It’s really nice’ in which the attachment of an adverbial phrase was manipulated. Results showed that adults and children exhibited similar processing preferences, but that children were delayed relative to adults in their detection of initial syntactic misanalysis. It is concluded that children and adults have the same sentence-parsing mechanism in place, but that it operates with a slightly different time course. In addition, the data support the hypothesis that the visual processing system develops at a different rate than the linguistic processing system in children.
Abstract:
Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.
Abstract:
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
Abstract:
McDaniel, Robinson-Riegler, and Einstein (1998) recently reported findings in support of the proposal that prospective remembering is largely conceptually driven. In each of the three experiments they reported, however, the task in which the prospective memory target was encountered at test had a predominantly conceptual focus, thereby potentially facilitating retrieval of conceptually encoded features of the studied target event. We report two experiments in which we manipulated the dimension (perceptual or conceptual) along which a target event varied between study and test while using a processing task, at both study and test, compatible with the relevant dimension of target change. When the target was encountered in a sentence validity task at study and test, and the semantic context in which a target was encountered was changed between these two occasions, prospective remembering declined (Experiment 1). A similar decline occurred, using a readability rating task, when the perceptual context (font in which the word was printed) was altered (Experiment 2). These results indicate that both perceptual and conceptual processes can support prospective remembering.
Abstract:
Syntactic theory provides a rich array of representational assumptions about linguistic knowledge and processes. Such detailed and independently motivated constraints on grammatical knowledge ought to play a role in sentence comprehension. However, most grammar-based explanations of processing difficulty in the literature have attempted to use grammatical representations and processes per se to explain processing difficulty. They did not take into account that the description of higher cognition in mind and brain encompasses two levels: at the macrolevel, symbolic computation is performed, while at the microlevel, computation is achieved through processes within a dynamical system. One critical question is therefore how linguistic theory and dynamical systems can be unified to provide an explanation for processing effects. Here, we present such a unification for a particular account of syntactic theory: namely a parser for Stabler's Minimalist Grammars, in the framework of Smolensky's Integrated Connectionist/Symbolic architectures. In simulations we demonstrate that the connectionist minimalist parser produces predictions that mirror global empirical findings from psycholinguistic research.
Abstract:
The visuo-spatial abilities of individuals with Williams syndrome (WS) have consistently been shown to be generally weak. These poor visuo-spatial abilities have been ascribed to a local processing bias by some [R. Rossen, E.S. Klima, U. Bellugi, A. Bihrle, W. Jones, Interaction between language and cognition: evidence from Williams syndrome, in: J. Beitchman, N. Cohen, M. Konstantareas, R. Tannock (Eds.), Language, Learning and Behaviour disorders: Developmental, Behavioural and Clinical Perspectives, Cambridge University Press, New York, 1996, pp. 367-392] and conversely, to a global processing bias by others [Psychol. Sci. 10 (1999) 453]. In this study, two identification versions and one drawing version of the Navon hierarchical processing task, a non-verbal task, were employed to investigate this apparent contradiction. The two identification tasks were administered to 21 individuals with WS, 21 typically developing individuals, matched by non-verbal ability, and 21 adult participants matched to the WS group by mean chronological age (CA). The third, drawing task was administered to the WS group and the typically developing (TD) controls only. It was hypothesised that the WS group would show differential processing biases depending on the type of processing the task was measuring. Results from two identification versions of the Navon task measuring divided and selective attention showed that the WS group experienced equal interference from global to local as from local to global levels, and did not show an advantage of one level over another. This pattern of performance was broadly comparable to that of the control groups. The third task, a drawing version of the Navon task, revealed that individuals with WS were significantly better at drawing the local form in comparison to the global figure, whereas the typically developing control group did not show a bias towards either level. 
In summary, this study demonstrates that individuals with WS show neither a local nor a global processing bias when asked to identify stimuli, but do show a local bias in their drawing. This dissociation may explain the apparently contradictory findings of previous studies. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Asynchronous Optical Sampling has the potential to improve the signal-to-noise ratio in THz transient spectrometry. The design of an inexpensive control scheme for synchronising two femtosecond-pulse frequency comb generators at an offset frequency of 20 kHz is discussed. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing of recorded THz transients in the time and frequency domains is outlined. Finally, possibilities for femtosecond pulse shaping using genetic algorithms are mentioned.