998 results for "visual timing"


Relevance: 100.00%

Abstract:

In this road-crossing simulation study, we assessed both participants' ability to visually judge whether they could cross a road and their adaptive walking behavior. To this end, participants were presented with a road inside the laboratory on which a bike approached at different velocities from different distances. Eight children aged 5-7, ten children aged 10-12, and ten adults were asked both to verbally judge whether they could cross the road and to actually walk across the road if possible. The results indicated that the verbal judgments did not correspond to the judgments made when actually crossing the road. With respect to safety and accuracy of judgments, the groups did not differ from each other, although the youngest group tended to be more cautious. All groups appeared to use a crossing strategy based on both the distance and the velocity of the approaching bike. Young children waited longer on the curb before crossing the road than older children and adults. All groups adjusted their crossing time to the time-to-arrival of the bike. These findings are discussed in relation to the ecological psychological approach and the putative dissociation between vision for perception (i.e., verbal judgment) and vision for action (i.e., actual crossing).
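The crossing decision studied above amounts to a gap-acceptance rule: accept the gap only if the bike's time-to-arrival exceeds the time needed to walk across. A minimal sketch of that rule; the function name, parameter names, and safety margin are illustrative assumptions, not values from the study:

```python
def can_cross(distance_m, bike_speed_ms, road_width_m,
              walking_speed_ms, safety_margin_s=0.5):
    """Gap-acceptance rule: cross only if the bike's time-to-arrival
    exceeds the time needed to walk across plus a safety margin."""
    time_to_arrival = distance_m / bike_speed_ms
    crossing_time = road_width_m / walking_speed_ms
    return time_to_arrival > crossing_time + safety_margin_s
```

With the illustrative numbers `can_cross(20, 5, 3, 1.5)` accepts the gap (4 s to arrival versus 2.5 s needed), while a bike only 10 m away at the same speed is rejected.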

Relevance: 70.00%

Abstract:

Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, and the words of a speaker are all examples of temporally structured sensory events. In humans, whether and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory, and that this involvement is independent of low-level visual processing. Most importantly, we demonstrate that V1 and V5/MT are functionally linked and temporally synchronized during time encoding, whereas they are functionally independent and operate serially (V1 followed by V5/MT) while maintaining temporal information in working memory. These data challenge the traditional view of V1 and V5/MT as visuospatial feature detectors and highlight the functional contribution and the temporal dynamics of these brain regions in the processing of time in the millisecond range. The present project resulted in the paper 'How the visual brain encodes and keeps track of time' by Paolo Salvioni, Lysiann Kalmbach, Micah Murray and Domenica Bueti, which has been submitted for publication to the Journal of Neuroscience.

Relevance: 70.00%

Abstract:

This investigation aimed to pinpoint the elements of motor timing control that are responsible for the increased variability commonly found in children with developmental dyslexia on paced or unpaced motor timing tasks (Chapter 3). Such temporal processing abilities are thought to be important for developing the appropriate phonological representations required for the development of literacy skills. Similar temporal processing difficulties arise in other developmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD). Motor timing behaviour in developmental populations was examined in the context of models of typical human timing behaviour, in particular the Wing-Kristofferson model, allowing estimation of the contribution of different timing control systems, namely timekeeper and implementation systems (Chapter 2 and Methods Chapters 4 and 5). Research examining timing in populations with dyslexia and ADHD has been inconsistent in the application of stimulus parameters, so the first investigation compared motor timing behaviour across different stimulus conditions (Chapter 6). The results question the suitability of visual timing tasks, which produced greater performance variability than auditory or bimodal tasks. Following an examination of the validity of the Wing-Kristofferson model (Chapter 7), the model was applied to time series data from an auditory timing task completed by children with reading difficulties and matched control groups (Chapter 8). Expected group differences in timing performance were not found; however, associations between performance and measures of literacy and attention were present. Results also indicated that measures of attention and literacy dissociated in their relationships with components of timing, with literacy ability being correlated with timekeeper variance and attentional control with implementation variance.
It is proposed that these timing deficits associated with reading difficulties are attributable to central timekeeping processes and so the contribution of error correction to timing performance was also investigated (Chapter 9). Children with lower scores on measures of literacy and attention were found to have a slower or failed correction response to phase errors in timing behaviour. Results from the series of studies suggest that the motor timing difficulty in poor reading children may stem from failures in the judgement of synchrony due to greater tolerance of uncertainty in the temporal processing system.
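The Wing-Kristofferson decomposition used throughout these chapters separates inter-tap interval variance into a central timekeeper component and a peripheral motor-implementation component, using the model's predictions that var(I) = clock_var + 2·motor_var and that the lag-1 autocovariance equals -motor_var. A minimal sketch (the function name and the clamping of negative estimates to zero are our own choices):

```python
from statistics import mean

def wing_kristofferson(intervals):
    """Wing-Kristofferson (1973) decomposition of inter-tap intervals.
    The model predicts
        var(I)              = clock_var + 2 * motor_var
        autocov(I at lag 1) = -motor_var
    so both components can be estimated from the interval series alone.
    Negative estimates are clamped to zero."""
    n = len(intervals)
    m = mean(intervals)
    dev = [i - m for i in intervals]
    var = sum(d * d for d in dev) / (n - 1)
    lag1 = sum(dev[k] * dev[k + 1] for k in range(n - 1)) / (n - 1)
    motor_var = max(-lag1, 0.0)
    clock_var = max(var - 2.0 * motor_var, 0.0)
    return clock_var, motor_var
```

The negative lag-1 autocovariance arises because each motor delay shortens one interval and lengthens the next, which is what lets the two noise sources be separated from a single tap series.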

Relevance: 40.00%

Abstract:

Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
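The additive model at issue compares the response to the audiovisual pair against the sum of the two unisensory responses; a nonzero residual marks a nonlinear interaction. A minimal sketch of that comparison (the function and its per-time-point list representation are illustrative, not the authors' analysis pipeline):

```python
def multisensory_residual(erp_av, erp_a, erp_v):
    """Additive-model residual, computed per time point: AV - (A + V).
    Positive values indicate superadditive and negative values
    subadditive nonlinear multisensory interactions."""
    return [av - (a + v) for av, a, v in zip(erp_av, erp_a, erp_v)]
```

In practice the residual waveform is tested against zero across trials or participants; the study above goes further by distinguishing topographic from pure strength modulations, which this per-channel residual alone cannot do.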

Relevance: 40.00%

Abstract:

Time perception is used in our day-to-day activities. While we understand quite well how our brain processes vision, touch or taste, the brain mechanisms subserving time perception remain largely understudied. In this study, we extended an experiment from a previous master's thesis by Tatiana Kenel-Pierre. We focused on time perception in the range of milliseconds. Previous studies have demonstrated the involvement of visual areas V1 and V5/MT in the encoding of temporal information of visual stimuli. Based on these findings, the aim of the present study was to understand whether temporal information is encoded in V1 and extrastriate area V5/MT in different spatial frames, i.e., head-centered versus eye-centered. To this end we asked eleven healthy volunteers to perform a temporal discrimination task on visual stimuli. Stimuli were presented at 4 different spatial positions (i.e., different combinations of retinotopic and spatiotopic position). While participants were engaged in this task we interfered with the activity of the right dorsal V1 and the right V5/MT using transcranial magnetic stimulation (TMS). Our preliminary results showed that TMS over both V1 and V5/MT impaired temporal discrimination of visual stimuli presented at specific spatial coordinates: whereas TMS over V1 impaired temporal discrimination of stimuli presented in the lower left quadrant, TMS over V5/MT affected temporal discrimination of stimuli presented in the upper left quadrant. Although it is difficult to draw conclusions from preliminary results, our data tentatively suggest that both V1 and V5/MT encode visual temporal information in specific spatial frames.

Relevance: 40.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 40.00%

Abstract:

Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks which are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.

Relevance: 40.00%

Abstract:

This study investigated the effects of augmented prenatal auditory stimulation on postnatal visual responsivity and neural organization in bobwhite quail (Colinus virginianus). I delivered conspecific embryonic vocalizations before, during, or after the development of a multisensory, midbrain audiovisual area, the optic tectum. Postnatal simultaneous choice tests revealed that hatchlings receiving augmented auditory stimulation during optic tectum development as embryos failed to show species-typical visual preferences for a conspecific maternal hen 72 hours after hatching. Auditory simultaneous choice tests showed no hatchlings had deficits in auditory function in any of the groups, indicating deficits were specific to visual function. ZENK protein expression confirmed differences in the amount of neural plasticity in multiple neuroanatomical regions of birds receiving stimulation during optic tectum development, compared to unmanipulated birds. The results of these experiments support the notion that the timing of augmented prenatal auditory stimulation relative to optic tectum development can impact postnatal perceptual organization in an enduring way.

Relevance: 30.00%

Abstract:

Different interceptive tasks and modes of interception (hitting or capturing) do not necessarily involve similar control processes. Control based on preprogramming of movement parameters is possible for actions with brief movement times but is now widely rejected; continuous perceptuomotor control models are preferred for all types of interception. The rejection of preprogrammed control and acceptance of continuous control is evaluated for the timing of rapidly executed, manual hitting actions. It is shown that a preprogrammed control model is capable of providing a convincing account of observed behavior patterns that avoids many of the arguments that have been raised against it. Prominent continuous perceptual control models are analyzed within a common framework and are shown to be interpretable as feedback control strategies. Although these models can explain observations of on-line adjustments to movement, they offer only post hoc explanations for observed behavior patterns in hitting tasks and are not directly supported by data. It is proposed that rapid manual hitting tasks make up a class of interceptions for which a preprogrammed strategy is adopted: a strategy that minimizes the role of visual feedback. Such a strategy is effective when the task demands a high degree of temporal accuracy.
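A preprogrammed strategy of the kind proposed can be reduced to a triggering rule: monitor time-to-contact and launch a ballistic movement of fixed duration once time-to-contact falls to that duration. A hedged sketch, assuming time-to-contact is approximated as distance over closing speed; the function name and the sampled-input representation are illustrative, not the paper's formal model:

```python
def launch_time(timestamps, distances, speeds, movement_time):
    """Preprogrammed triggering rule: scan successive visual samples and
    return the timestamp at which time-to-contact (approximated as
    distance / closing speed) first drops to the fixed movement time of
    the ballistic hit; the movement itself then runs without visual
    feedback."""
    for t, d, v in zip(timestamps, distances, speeds):
        if d / v <= movement_time:
            return t
    return None  # target never comes within reach of the programmed swing
```

The key contrast with continuous feedback control is that after the trigger fires, no further visual information alters the movement.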

Relevance: 30.00%

Abstract:

Phototransduction in vertebrate photoreceptor cells represents a paradigm of signaling pathways mediated by G-protein-coupled receptors (GPCRs), which share common modules linking the initiation of the cascade to the final response of the cell. In this work, we focused on the recovery phase of the visual photoresponse, which comprises several interacting mechanisms. We employed current biochemical knowledge to investigate the response mechanisms of a comprehensive model of the visual phototransduction pathway. In particular, we have improved the model by implementing a more detailed representation of the recoverin (Rec)-mediated calcium feedback on rhodopsin kinase and including a dynamic arrestin (Arr) oligomerization mechanism. The model was successfully employed to investigate the rate-limiting steps in the recovery of the rod photoreceptor cell after illumination. Simulation of experimental conditions in which the expression levels of rhodopsin kinase (RK), of the regulator of G-protein signaling (RGS), of Arr and of Rec were altered individually or in combination revealed severe kinetic constraints on the dynamics of the overall network. Our simulations confirm that RGS-mediated effector shutdown is the rate-limiting step in the recovery of the photoreceptor and show that the dynamic formation and dissociation of Arr homodimers and homotetramers at different light intensities significantly affect the timing of rhodopsin shutdown. The transition of Arr from its oligomeric storage forms to its monomeric form serves to temper its availability in the functional state. Our results may explain the puzzling evidence that overexpressing RK does not influence the saturation time of rod cells at bright light stimuli. The approach presented here could be extended to the study of other GPCR signaling pathways.
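The dynamic oligomerization mechanism described above can be caricatured as two coupled mass-action equilibria, monomer to dimer and dimer to tetramer, with the monomer pool feeding the functional state. A minimal forward-Euler sketch; all rate constants are illustrative placeholders, not the fitted parameters of the published model:

```python
def simulate_arrestin(a1=1.0, a2=0.0, a4=0.0,
                      kf_dim=2.0, kr_dim=0.5,
                      kf_tet=1.0, kr_tet=0.2,
                      dt=1e-3, steps=20_000):
    """Forward-Euler integration of a minimal mass-action scheme for
    arrestin self-association:
        2 A   <->  A2   (rates kf_dim, kr_dim)
        2 A2  <->  A4   (rates kf_tet, kr_tet)
    a1, a2, a4 are monomer, dimer and tetramer concentrations; total
    arrestin, a1 + 2*a2 + 4*a4, is conserved by construction."""
    for _ in range(steps):
        v_dim = kf_dim * a1 * a1 - kr_dim * a2   # net dimerisation flux
        v_tet = kf_tet * a2 * a2 - kr_tet * a4   # net tetramerisation flux
        a1 += dt * (-2.0 * v_dim)
        a2 += dt * (v_dim - 2.0 * v_tet)
        a4 += dt * v_tet
    return a1, a2, a4
```

Because the monomer is the functional form, shifting the equilibrium constants toward the oligomers lowers the free pool available for rhodopsin shutdown, which is the buffering role the study attributes to the storage forms.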

Relevance: 30.00%

Abstract:

One of the greatest conundrums in contemporary science is the relation between consciousness and brain activity, and one of the specific questions is how neural activity can generate vivid subjective experiences. Studies focusing on visual consciousness have become essential in solving the empirical questions of consciousness. The main aim of this thesis is to clarify the relation between visual consciousness and the neural and electrophysiological processes of the brain. By applying electroencephalography and functional magnetic resonance image-guided transcranial magnetic stimulation (TMS), we investigated the links between conscious perception and attention, the temporal evolution of visual consciousness during stimulus processing, the causal roles of primary visual cortex (V1), visual area 2 (V2) and lateral occipital cortex (LO) in the generation of visual consciousness, and also the methodological issues concerning the accuracy of targeting TMS to V1. The results showed that the first effects of visual consciousness on electrophysiological responses (about 140 ms after stimulus onset) appeared earlier than the effects of selective attention, and also in the unattended condition, suggesting that visual consciousness and selective attention are two independent phenomena with distinct underlying neural mechanisms. In addition, while it is well known that V1 is necessary for visual awareness, the results of the present thesis suggest that the abutting visual area V2 is also a prerequisite for conscious perception. In our studies, activation in V2 was necessary for the conscious perception of a change in contrast for a shorter period of time than for more detailed conscious perception. We also found that TMS over LO suppressed the conscious perception of object shape when delivered in two distinct time windows, the latter corresponding with the timing of the ERPs related to the conscious perception of coherent object shape.
This result supports the view that LO is crucial in the conscious perception of object coherency and is likely to be directly involved in the generation of visual consciousness. Furthermore, we found that visual sensations, or phosphenes, elicited by TMS of V1 were brighter than identically induced phosphenes arising from V2. These findings demonstrate that V1 contributes more to the generation of the sensation of brightness than does V2. The results also suggest that top-down activation from V2 to V1 is probably associated with phosphene generation. The results of the methodological study imply that when a commonly used landmark (2 cm above the inion) is used in targeting TMS to V1, the TMS-induced electric field is likely to be highest in dorsal V2. When V1 was targeted according to individual retinotopic data, the electric field was highest in V1 in only half of the participants. This result suggests that if the objective is to study the role of V1 with TMS methodology, at least functional maps of V1 and V2 should be combined with a computational model of the TMS-induced electric field in V1 and V2. Finally, the results of this thesis imply that different features of attention contribute differently to visual consciousness, and thus a theoretical model of the relationship between visual consciousness and attention should acknowledge these differences. Future studies should also explore the possibility that visual consciousness consists of several processing stages, each of which has its own distinct underlying neural mechanisms.

Relevance: 30.00%

Abstract:

In this thesis, three main questions were addressed using event-related potentials (ERPs): (1) the timing of lexical semantic access, (2) the influence of "top-down" processes on visual word processing, and (3) the influence of "bottom-up" factors on visual word processing. The timing of lexical semantic access was investigated in two studies using different designs. In Study 1, 14 participants completed two tasks: a standard lexical decision (LD) task which required a word/nonword decision to each target stimulus, and a semantically primed version (LS) of it using the same category of words (e.g., animal) within each block, following which participants made a category judgment. In Study 2, another 12 participants performed a standard semantic priming task, where target stimulus words (e.g., nurse) could be either semantically related or unrelated to their primes (e.g., doctor, tree) but the order of presentation was randomized. We found evidence in both ERP studies that lexical semantic access might occur early, within the first 200 ms (at about 170 ms for Study 1 and at about 160 ms for Study 2). Our results are consistent with more recent ERP and eye-tracking studies and contrast with the traditional research focus on the N400 component. "Top-down" processes, such as a person's expectation and strategic decisions, were possible in Study 1 because of the blocked design, but they were not in Study 2 with its randomized design. Comparing results from the two studies, we found that visual word processing could be affected by a person's expectation and that the effect occurred early, at a sensory/perceptual stage: a semantic task effect in the P1 component at about 100 ms in the ERP was found in Study 1, but not in Study 2. Furthermore, we found that such "top-down" influence on visual word processing might be mediated through separate mechanisms depending on whether the stimulus was a word or a nonword.
"Bottom-up" factors involve inherent characteristics of particular words, such as bigram frequency (the total frequency of two-letter combinations of a word), word frequency (the frequency of the written form of a word), and neighborhood density (the number of words that can be generated by changing one letter of an original word or nonword). A bigram frequency effect was found when comparing the results from Studies 1 and 2, but it was examined more closely in Study 3. Fourteen participants performed a similar standard lexical decision task, but the words and nonwords were selected systematically to provide a greater range in the aforementioned factors. As a result, a total of 18 word conditions were created, with 18 nonword conditions matched on neighborhood density and neighborhood frequency. Using multiple regression analyses, we found that the P1 amplitude was significantly related to bigram frequency for both words and nonwords, consistent with results from Studies 1 and 2. In addition, word frequency and neighborhood frequency were also able to influence the P1 amplitude separately for words and for nonwords, and there appeared to be a spatial dissociation between the two effects: for words, the word frequency effect in P1 was found at the left electrode site; for nonwords, the neighborhood frequency effect in P1 was found at the right electrode site. The implications of our findings are discussed.

Relevance: 30.00%

Abstract:

We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use, and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
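The morphing step can be sketched as a correspondence-aware cross-dissolve: each pixel of one viseme is blended with its flow-matched pixel in the next, and transitions are concatenated with frame counts taken from the phoneme durations. A simplified 1-D sketch; the real system operates on full images and also displaces pixels geometrically along the flow, so everything here is an illustrative reduction with made-up names:

```python
def morph_frame(row_a, row_b, flow, alpha):
    """One intermediate frame of a flow-based morph between two viseme
    images, reduced here to 1-D rows of grey values.  flow[i] is the
    displacement taking pixel i of row_a to its correspondence in
    row_b.  Each pixel is cross-dissolved with its matched partner; a
    full morph would additionally move pixels a fraction alpha along
    the correspondence vector."""
    n = len(row_a)
    frame = []
    for x in range(n):
        xb = min(max(x + flow[x], 0), len(row_b) - 1)  # matched pixel in row_b
        frame.append((1 - alpha) * row_a[x] + alpha * row_b[xb])
    return frame

def viseme_stream(visemes, flows, durations, fps=30):
    """Concatenate one morph transition per phoneme, with the frame
    count of each transition set by the phoneme duration (seconds)
    reported by the text-to-speech front end."""
    frames = []
    for (a, b), flow, dur in zip(zip(visemes, visemes[1:]), flows, durations):
        n = max(1, round(dur * fps))
        for k in range(n):
            frames.append(morph_frame(a, b, flow, k / n))
    return frames
```

Driving the frame count from the synthesizer's phoneme timings is what keeps the visual stream synchronized with the audio stream.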