901 results for Visual stimulus generation
Resumo:
Interactions between stimulus-induced oscillations (35-80 Hz) and stimulus-locked nonoscillatory responses were investigated in the visual cortex areas 17 and 18 of anaesthetized cats. A single square-wave luminance grating was used as a visual stimulus during simultaneous recordings from up to seven electrodes. The stimulus movement consisted of a superposition of a smooth movement with a sequence of dynamically changing accelerations. Responses of local groups of neurons at each electrode were studied on the basis of multiple unit activity and local slow field potentials (13-120 Hz). Oscillatory and stimulus-locked components were extracted from multiple unit activity and local slow field potentials and quantified by a combination of temporal and spectral correlation methods. We found fast stimulus-locked components primarily evoked by sudden stimulus accelerations, whereas oscillatory components (35-80 Hz) were induced during slow smooth movements. Oscillations were gradually reduced in amplitude and finally fully suppressed with increasing amplitudes of fast stimulus-locked components. It is argued that suppression of oscillations is necessary to prevent confusion during sequential processing of stationary and fast changing retinal images.
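The separation the abstract describes, between stimulus-locked (evoked) and oscillatory (induced) components, can be illustrated with a minimal sketch: averaging across trials preserves only phase-locked activity, and subtracting that average from each trial leaves the non-phase-locked part. This is a generic illustration of the principle, not the authors' exact combination of temporal and spectral correlation methods; the function name and parameters are invented for the example.

```python
import numpy as np

def split_evoked_induced(trials, fs, band=(35.0, 80.0)):
    """Separate stimulus-locked (evoked) from induced activity.

    trials : array (n_trials, n_samples) of band-limited field potentials.
    The trial average keeps only phase-locked (evoked) activity;
    subtracting it from each trial leaves the induced component.
    """
    evoked = trials.mean(axis=0)   # phase-locked part survives averaging
    induced = trials - evoked      # residual, non-phase-locked activity

    # Power spectra restricted to the gamma band of interest.
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    evoked_power = np.abs(np.fft.rfft(evoked))[mask] ** 2
    induced_power = (np.abs(np.fft.rfft(induced, axis=1))[:, mask] ** 2).mean(axis=0)
    return freqs[mask], evoked_power, induced_power
```

In this decomposition a component whose phase is reset by the stimulus on every trial shows up in the evoked spectrum, while gamma oscillations with trial-to-trial phase jitter survive only in the induced spectrum.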
Resumo:
Functional roles of the cortical backward signal in long-term memory formation were studied in monkeys performing a visual pair-association task. Before the monkeys learned the task, the anterior commissure was transected, disconnecting the anterior temporal cortex of each hemisphere. After training with 12 pairs of pictures, single units were recorded from the inferotemporal cortex of the monkeys as the control. By injecting a grid of ibotenic acid, we unilaterally lesioned the entorhinal and perirhinal cortex, which provides massive direct and indirect backward projections ipsilaterally to the inferotemporal cortex. After the lesion, the monkeys fixated the cue stimulus normally, relearned the preoperatively learned set (set A), and learned a new set (set B) of paired associates. Then, single units were recorded from the same area as for the prelesion control. We found that (i) in spite of the lesion, the sampled neurons responded strongly and selectively to both the set A and set B patterns and (ii) the paired associates elicited significantly correlated responses in the control neurons before the lesion but not in the cells tested after the lesion, either for set A or set B stimuli. We conclude that the ability of inferotemporal neurons to represent association between picture pairs was lost after the lesion of entorhinal and perirhinal cortex, most likely through disruption of backward neural signals to the inferotemporal neurons, while the ability of the neurons to respond to a particular visual stimulus was left intact.
Resumo:
The role of intrinsic cortical connections in processing sensory input and in generating behavioral output is poorly understood. We have examined this issue in the context of the tuning of neuronal responses in cortex to the orientation of a visual stimulus. We analytically study a simple network model that incorporates both orientation-selective input from the lateral geniculate nucleus and orientation-specific cortical interactions. Depending on the model parameters, the network exhibits orientation selectivity that originates from within the cortex, by a symmetry-breaking mechanism. In this case, the width of the orientation tuning can be sharp even if the lateral geniculate nucleus inputs are only weakly anisotropic. By using our model, several experimental consequences of this cortical mechanism of orientation tuning are derived. The tuning width is relatively independent of the contrast and angular anisotropy of the visual stimulus. The transient population response to changing of the stimulus orientation exhibits a slow "virtual rotation." Neuronal cross-correlations exhibit long time tails, the sign of which depends on the preferred orientations of the cells and the stimulus orientation.
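The kind of model the abstract describes, a ring of orientation columns receiving weakly tuned feedforward input and coupled by orientation-specific recurrent interactions, can be sketched as a threshold-linear rate network. This is a generic illustration in the spirit of such ring models, not the authors' exact equations; the kernel values and all parameters are invented for the example.

```python
import numpy as np

def ring_model(theta0=0.0, contrast=1.0, eps=0.1, n=180,
               j0=-2.0, j2=3.0, steps=2000, dt=0.01):
    """Threshold-linear rate network on a ring of orientation columns.

    The feedforward input is only weakly tuned (modulation `eps`); the
    cosine-tuned recurrent kernel (`j2`) amplifies the tuned component
    and sharpens the population response by symmetry breaking.
    """
    theta = np.linspace(-np.pi / 2, np.pi / 2, n, endpoint=False)
    # weakly anisotropic LGN-like input peaked at theta0
    h = contrast * (1.0 + eps * np.cos(2 * (theta - theta0)))
    # uniform inhibition plus orientation-specific excitation
    J = (j0 + j2 * np.cos(2 * (theta[:, None] - theta[None, :]))) / n
    r = np.zeros(n)
    for _ in range(steps):  # Euler integration toward the fixed point
        r += dt * (-r + np.maximum(h + J @ r, 0.0))
    return theta, h, r
```

With these (invented) parameters the cosine mode of the recurrence has gain j2/2 > 1, so the network operates in the symmetry-breaking regime: the output bump is far more sharply modulated than the input, and its width is largely insensitive to `contrast`, in line with the cortical mechanism the abstract derives.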
Resumo:
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as da or tha, was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control-group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. (C) 2004 Wiley Periodicals, Inc.
Resumo:
Neuronal operations associated with the top-down control process of shifting attention from one locus to another involve a network of cortical regions, and their influence is deemed fundamental to visual perception. However, the extent and nature of these operations within primary visual areas are unknown. In this paper, we used magnetoencephalography (MEG) in combination with magnetic resonance imaging (MRI) to determine whether, prior to the onset of a visual stimulus, neuronal activity within early visual cortex is affected by covert attentional shifts. Time/frequency analyses were used to identify the nature of this activity. Our results show that shifting attention towards an expected visual target results in a late-onset (600 ms postcue onset) depression of alpha activity which persists until the appearance of the target. Independent component analysis (ICA) and dipolar source modeling confirmed that the neuronal changes we observed originated from within the calcarine cortex. Our results further show that the amplitude changes in alpha activity were induced, not evoked (i.e., not phase-locked to the cued attentional task). We argue that the decrease in alpha prior to the onset of the target may serve to prime the early visual cortex for incoming sensory information. We conclude that attentional shifts affect activity within the human calcarine cortex by altering the amplitude of spontaneous alpha rhythms and that subsequent modulation of visual input with attentional engagement follows as a consequence of these localized changes in oscillatory activity. © 2005 Elsevier B.V. All rights reserved.
Resumo:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent auditory stimuli; and integrating stimuli arriving from the sensory organs (i.e., the ears) with stimuli delivered through brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, which made it an attractive target for studying stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
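The frequency-tagging logic described above, attributing a neuron's spikes to whichever amplitude-modulated sound source they entrain to, can be sketched with a standard vector-strength computation on spike times. This is a generic illustration, not the dissertation's actual analysis code; the function name and parameters are invented for the example.

```python
import numpy as np

def tagging_strength(spike_times, mod_freqs):
    """Attribute spikes to competing AM sound sources by frequency tagging.

    Each source is amplitude-modulated at its own rate; the phase locking
    of spikes to each modulation frequency (vector strength) indicates
    which source is driving the neuron.

    spike_times : 1-D array of spike times in seconds.
    mod_freqs   : dict mapping source name -> AM frequency in Hz.
    """
    out = {}
    for name, f in mod_freqs.items():
        phases = 2 * np.pi * f * spike_times            # phase of each spike
        out[name] = np.abs(np.exp(1j * phases).mean())  # vector strength, 0..1
    return out
```

A neuron entrained to one source yields a vector strength near 1 at that source's modulation rate and near 0 at the other's, which is what makes spike timing usable as a label for the dominant source in a two-sound scene.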
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Resumo:
The degree to which a person relies on visual stimuli for spatial orientation is termed visual dependency (VD). VD is considered a perceptual trait or cognitive style influenced by psychological factors and mediated by central re-weighting of the sensory inputs involved in spatial orientation. VD is often measured using the rod-and-disk test, wherein participants align a central rod to the subjective visual vertical (SVV) in the presence of a background that is either stationary or rotating around the line of sight - dynamic SVV. Although this task has been employed to assess VD in health and vestibular disease, it is unknown what effect torsional nystagmic eye movements may have on individual performance. Using caloric ear irrigation, 3D video-oculography and the rod-and-disk test, we show that caloric torsional nystagmus modulates measures of visual dependency and demonstrate that increases in tilt after irrigation are positively correlated with changes in ocular torsional eye movements. When the direction of the slow phase of the torsional eye movement induced by the caloric is congruent with that induced by the rotating visual stimulus, there is a significant increase in tilt. When these two torsional components are in opposition there is a decrease. These findings show that measures of visual dependence can be influenced by oculomotor responses induced by caloric stimulation. The findings are of significance for clinical studies as they indicate that VD, which often increases in vestibular disorders, is not only modulated by changes in cognitive style but also by eye movements, in particular nystagmus.
Resumo:
Retinal image properties such as contrast and spatial frequency play important roles in the development of normal vision. For example, visual environments comprised solely of low contrast and/or low spatial frequencies induce myopia. The visual image is processed by the retina and it then locally controls eye growth. In terms of the retinal neurotransmitters that link visual stimuli to eye growth, there is strong evidence to suggest involvement of the retinal dopamine (DA) system. For example, effectively increasing retinal DA levels by using DA agonists can suppress the development of form-deprivation myopia (FDM). However, whether visual feedback controls eye growth by modulating retinal DA release, and/or some other factors, is still being elucidated. This thesis is chiefly concerned with the relationship between the dopaminergic system and retinal image properties in eye growth control. More specifically, whether the amount of retinal DA release reduces as the complexity of the image degrades was determined. For example, we investigated whether the level of retinal DA release decreased as image contrast decreased. In addition, the effects of spatial frequency, spatial energy distribution slope, and spatial phase on retinal DA release and eye growth were examined. When chicks were 8-days-old, a cone-lens imaging system was applied monocularly (+30 D, 3.3 cm cone). A short-term treatment period (6 hr) and a longer-term treatment period (4.5 days) were used. The short-term treatment tests for the acute reduction in DA release by the visual stimulus, as is seen with diffusers and lenses, whereas the 4.5 day point tests for reduction in DA release after more prolonged exposure to the visual stimulus. In the contrast study, 1.35 cyc/deg square wave grating targets of 95%, 67%, 45%, 12% or 4.2% contrast were used. Blank (0% contrast) targets were included for comparison. 
In the spatial frequency study, both sine and square wave grating targets with either 0.017 cyc/deg or 0.13 cyc/deg fundamental spatial frequencies and 95% contrast were used. In the spectral slope study, 30% root-mean-squared (RMS) contrast fractal noise targets with spectral fall-off of 1/f^0.5, 1/f, and 1/f^2 were used. In the spatial alignment study, a structured Maltese cross (MX) target, a structured circular patterned (C) target, and the scrambled versions of these two targets (SMX and SC) were used. Each treatment group comprised 6 chicks for ocular biometry (refraction and ocular dimension measurement) and 4 for analysis of retinal DA release. Vitreal dihydroxyphenylacetic acid (DOPAC) was analysed by ion-paired reversed-phase high-performance liquid chromatography with electrochemical detection (HPLC-ED) as a measure of retinal DA release. Comparing retinal DA release with eye growth, large reductions in retinal DA release, possibly due to the decreased light level inside the cone-lens imaging system, were observed across all treated eyes, while only those exposed to low contrast, low spatial frequency sine wave grating, 1/f^2, C, and SC targets had myopic shifts in refraction. Amongst these treatment groups, no acute effect was observed, and longer-term effects were only found in the low contrast and 1/f^2 groups. These findings suggest that retinal DA release does not causally link visual stimulus properties to eye growth, and that these target-induced changes in refractive development are not dependent on the level of retinal DA release. Retinal dopaminergic cells might instead be affected indirectly via other retinal cells that respond immediately to changes in the contrast of the retinal image.
Resumo:
Aim: Our pedagogical research addressed the following research questions: 1) Can shared ‘cyber spaces’, such as a ‘wiki’, be occupied by undergraduate women’s health students to improve their critical thinking skills? 2) What are the learning processes via which this occurs? 3) What are the implications of this assessment trial for achieving learning objectives and outcomes in future public health undergraduate courses? Methods: The students contributed written, critical reflections (approximately 250 words) to the wiki each week following the lecture. Students reflected on a range of topics including the portrayal of women in the media, femininity, gender inequality, child bearing and rearing, domestic violence, mental health, Indigenous women, older women, and LGBTIQ communities. Their entries were anonymous, but visible to their peers. Each wiki entry contained a ‘discussion tab’ wherein online conversations were initiated. We used a social constructivist approach to grounded theory to analyse the 480 entries posted over the semester (http://pub336womenshealth.wikispaces.com/). Results: The social constructivist approach initiated by Vygotsky (1978) and further developed by Jonasson (1994) was used to analyse the students’ contributions in relation to four key thematic outcomes: 1) Complexities in representations across contexts; 2) Critical evaluation in real-world scenarios; 3) Reflective practice based on experience; and 4) Collaborative co-construction of knowledge. Both text and image/visual contributions are provided as examples within each of these learning processes. A theoretical model depicting the interactive learning processes that occurred via discussion of the textual and visual stimulus is presented.
Resumo:
A decision is a commitment to a proposition or plan of action based on evidence and the expected costs and benefits associated with the outcome. Progress in a variety of fields has led to a quantitative understanding of the mechanisms that evaluate evidence and reach a decision. Several formalisms propose that a representation of noisy evidence is evaluated against a criterion to produce a decision. Without additional evidence, however, these formalisms fail to explain why a decision-maker would change their mind. Here we extend a model, developed to account for both the timing and the accuracy of the initial decision, to explain subsequent changes of mind. Subjects made decisions about a noisy visual stimulus, which they indicated by moving a handle. Although they received no additional information after initiating their movement, their hand trajectories betrayed a change of mind in some trials. We propose that noisy evidence is accumulated over time until it reaches a criterion level, or bound, which determines the initial decision, and that the brain exploits information that is in the processing pipeline when the initial decision is made to subsequently either reverse or reaffirm the initial decision. The model explains both the frequency of changes of mind as well as their dependence on both task difficulty and whether the initial decision was accurate or erroneous. The theoretical and experimental findings advance the understanding of decision-making to the highly flexible and cognitive acts of vacillation and self-correction.
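The extended model sketched in the abstract, accumulation of noisy evidence to a bound followed by continued processing of evidence already in the sensory pipeline, can be illustrated in simplified form. This sketch uses a sign-reversal criterion for a change of mind rather than the separate change-of-mind bound of the full model, and all parameter values are invented for the example.

```python
import numpy as np

def decide_with_change_of_mind(drift, bound=1.0, dt=0.001, noise=1.0,
                               pipeline_ms=100, rng=None):
    """One trial of bounded evidence accumulation with a post-decision stage.

    Noisy evidence accumulates until it hits +bound or -bound (the initial
    decision); evidence already in the sensory pipeline then continues to
    arrive and may reverse that decision. As a simplification, a reversal
    here means the post-decision total crosses zero, rather than a separate
    change-of-mind bound. Returns (initial_choice, final_choice, rt_seconds).
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:  # accumulate to the decision bound
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    initial = 1 if x > 0 else -1
    # pipeline evidence keeps accumulating after the initial decision
    for _ in range(int(pipeline_ms / (dt * 1000))):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    final = initial if x * initial > 0 else -initial
    return initial, final, t
```

Because the pipeline evidence shares the stimulus drift, erroneous initial decisions are more likely to be reversed than correct ones, matching the dependence of changes of mind on initial accuracy that the abstract reports.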
Resumo:
Objective assessment of animal personality is typically time consuming, requiring the repeated measure of behavioural responses. By contrast, subjective assessment of personality allows information to be collected quickly by experienced caregivers. However, subjective assessment must predict behaviour to be valid. Comparisons of subjective assessments and behaviour have been made but often with methodological weaknesses and thus, limited success. Here we test the validity of a subjective assessment against a battery of behaviour tests in 146 horses (Equus caballus). Our first aim was to determine if subjective personality assessment could predict behaviour during behaviour testing. We made specific a priori predictions for how subjectively measured personality should relate to behaviour testing. We found that Extroversion predicted time to complete a handling test and refusal behaviour during this test. It also predicted minimum distance to a novel object. Neuroticism predicted how reactive an individual was to a sudden visual stimulus but not how quickly it recovered from this. Agreeableness did not predict any behaviour during testing. There were several unpredicted correlations between subjective measures and behaviour tests which we explore further. Our second aim was to combine data from the subjective assessment and behaviour tests to gain a more comprehensive understanding of personality. We found that the combination of methods provides new insights into horse behaviour. Furthermore, our data are consistent with the idea of horses showing different coping styles, a novel finding for this species. © 2013 Elsevier B.V.
Resumo:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Resumo:
The present study explored processing strategies used by individuals when they begin to read a novel script. Stimuli were artificial words created from symbols and based on an alphabetic system. The words were presented to Grade Nine and Ten students, with variations included in the difficulty of orthography and word familiarity, and scores were recorded as the mean number of trials for defined learning variables. Qualitative findings revealed that subjects learned parts of the visual and auditory features of words prior to hooking up the visual stimulus to the word's name. Performance measures which appear to affect the rate of learning were as follows: auditory short-term memory, auditory delayed short-term memory, visual delayed short-term memory, and word attack or decoding skills. Qualitative data emerging in verbal reports by the subjects revealed that the strategies they perceived themselves to use were graphic, phonetic decoding, and word reading.
The integration of visuomotor decision-making and motor action under ambiguous conditions
Resumo:
Decision-making is a mechanism that engages higher neural structures to link perception of a signal to action. Much work aimed at understanding the mechanisms of decision-making is being carried out at various levels, from cognitive behavioral analysis to computational modelling. The goal of this project was to evaluate, from one moment to the next, how the variability of the observed signal ("noise") influences the ability of human subjects to detect the direction of motion in a visual stimulus. In this work, we eliminated one of the potential sources of variability, the frame-to-frame variability in the number of dots carrying the three coherent motion signals (leftward, rightward, and random) in random-dot kinematogram (RDK) stimuli, that is, variability of peripheral origin. The "V6" RDK stimuli were standard RDK stimuli with instantaneous signal variability, whereas the "V8" RDK stimuli were modified to eliminate the stochastic variability due to moment-to-moment variation in the number of pixels carrying the coherent signal. If subjects' performance, as measured by reaction time and the number of correct responses, differed between stimuli in which the number of coherently moving dots varied (V6) or did not vary (V8), this would be evidence that variability of peripheral origin modulates the decision process. Conversely, if performance did not differ between the two stimulus types, this would be evidence that the major source of performance variability is of central origin. Our results showed that reaction time and the number of correct responses were modulated by the net evidence for coherent motion.
Moreover, we established that eliminating the peripheral variability defined above produced no real change in the measurements: there was no clear distinction between the distributions of errors and correct responses across trials for the two stimuli we used, V6 and V8. After measuring the "motion energy" of the stimuli, we therefore proposed that the variability observed in the results is probably of central origin.
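The V6/V8 manipulation described above, a standard random-dot stimulus in which the number of signal dots fluctuates frame to frame versus one in which that number is held fixed, can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus code; the function name and parameters are invented for the example.

```python
import numpy as np

def rdk_signal_dots(n_dots=100, coherence=0.3, fixed=False, rng=None):
    """Assign the dots of one random-dot-kinematogram frame to signal or noise.

    In a standard RDK (the 'V6'-like case) each dot is independently a
    signal dot with probability `coherence`, so the signal count fluctuates
    from frame to frame. With `fixed=True` (the 'V8'-like case) exactly
    round(coherence * n_dots) dots carry the signal on every frame,
    removing that peripheral source of variability.
    Returns a boolean mask over dots: True = moves coherently.
    """
    rng = rng or np.random.default_rng()
    if fixed:
        k = int(round(coherence * n_dots))
        signal = np.zeros(n_dots, dtype=bool)
        signal[rng.choice(n_dots, size=k, replace=False)] = True
    else:
        signal = rng.random(n_dots) < coherence
    return signal
```

Comparing behavior across the two modes isolates the stochasticity of the stimulus itself: if performance is identical when the frame-to-frame signal count is clamped, the remaining trial-to-trial variability must arise centrally, which is the logic of the V6 versus V8 comparison.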
Resumo:
If people monitor a visual stimulus stream for targets, they often miss the second target (T2) if it appears soon after the first (T1): the attentional blink. There is one exception: T2 is often not missed if it appears right after T1, i.e., at lag 1. This lag-1 sparing is commonly attributed to the possibility that T1 processing opens an attentional gate, which may be so sluggish that an early T2 can slip in before it closes. We investigated why the gate may close and exclude further stimuli from processing. We compared a control approach, which assumes that gate closing is exogenously triggered by the appearance of nontargets, and an integration approach, which assumes that gate closing is under endogenous control. As predicted by the latter but not the former, T2 performance and target reversals were strongly affected by the temporal distance between T1 and T2, whereas the presence or absence of a nontarget intervening between T1 and T2 had little impact. (c) 2005 Elsevier B.V. All rights reserved.