24 results for visual motor integration
em QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
This study investigated the association between different neonatal ultrasonographic classifications and adolescent cognitive, educational, and behavioral outcomes following very preterm birth. Participants included a group of 120 adolescents who were born very preterm (33 weeks of gestation), subdivided into three groups according to their neonatal cerebral ultrasound (US) classifications: (a) normal (N = 69), (b) periventricular hemorrhage (PVH, N = 37), and (c) PVH with ventricular dilatation (PVH + DIL, N = 14), and 50 controls. The cognitive functions assessed were full-scale IQ, phonological and semantic verbal fluency, and visual-motor integration. Educational outcomes included reading and spelling; behavioral outcomes were assessed with the Rutter Parents' Scale and the Premorbid Adjustment Scale (PAS). Adolescent outcome scores were compared among the four groups. A main effect for group was observed for full-scale IQ, Rutter Parents' Scale total scores, and PAS total scores, after controlling for gestational age, socioeconomic status, and gender, with the PVH + DIL group showing the most impaired scores compared to the other groups. The current results demonstrate that routine neonatal ultrasound classifications are associated with later cognitive and behavioral outcomes. Neonatal ultrasounds could aid in the identification of subgroups of children who are at increased risk of neurodevelopmental problems. These at-risk subgroups could then be referred to appropriate early intervention services.
Abstract:
Cognitive and neurophysiological correlates of arithmetic calculation, concepts, and applications were examined in 41 adolescents, ages 12-15 years. Psychological and task-related EEG measures which correctly distinguished children who scored low vs. high (using a median split) in each arithmetic subarea were interpreted as indicative of processes involved. Calculation was related to visual-motor sequencing, spatial visualization, theta activity measured during visual-perceptual and verbal tasks at right- and left-hemisphere locations, and right-hemisphere alpha activity measured during a verbal task. Performance on arithmetic word problems was related to spatial visualization and perception, vocabulary, and right-hemisphere alpha activity measured during a verbal task. Results suggest a complex interplay of spatial and sequential operations in arithmetic performance, consistent with processing model concepts of lateralized brain function.
Abstract:
One of the first attempts to develop a formal model of depth cue integration is to be found in Maloney and Landy's (1989) "human depth combination rule". They advocate that the combination of depth cues by the visual system is best described by a weighted linear model. The present experiments tested whether the linear combination rule applies to the integration of texture and shading. As would be predicted by a linear combination rule, the weight assigned to the shading cue did vary as a function of its curvature value. However, the weight assigned to the texture cue varied systematically as a function of the curvature value of both cues. Here we describe a non-linear model which provides a better fit to the data. Redescribing the stimuli in terms of depth rather than curvature reduced the goodness of fit for all models tested. These results support the hypothesis that the locus of cue integration is a curvature map, rather than a depth map. We conclude that the linear combination rule does not generalize to the integration of shading and texture, and that for these cues it is likely that integration occurs after the recovery of surface curvature.
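The weighted linear combination rule referred to above can be sketched in a few lines. The cue estimates and weights below are hypothetical illustrations, not values from the study; the point is only that the combined estimate is a weighted sum whose weights total 1.

```python
# Weighted linear cue combination in the spirit of Maloney & Landy (1989):
# the combined estimate is a weighted sum of per-cue estimates, with the
# weights constrained to sum to 1. All numbers here are hypothetical.

def combine_cues(estimates, weights):
    """Combine per-cue estimates (e.g. texture, shading) linearly."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(e * w for e, w in zip(estimates, weights))

# Example: a texture cue suggesting curvature 0.8 and a shading cue
# suggesting 1.2, with the texture cue weighted more heavily.
combined = combine_cues([0.8, 1.2], [0.6, 0.4])
print(combined)  # 0.96
```

The abstract's finding is that, for texture and shading, fixed weights of this form fail: the texture weight depended on the curvature signalled by both cues, motivating the non-linear model.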
Abstract:
Previous studies have attempted to identify sources of contextual information which can facilitate dual adaptation to two variants of a novel environment, which are normally prone to interference. The type of contextual information previously used can be grouped into two broad categories: that which is arbitrary to the motor system, such as a colour cue, and that which is based on an internal property of the motor system, such as a change in movement effector. The experiments reported here examined whether associating visuomotor rotations to visual targets and movements of different amplitude would serve as an appropriate source of contextual information to enable dual adaptation. The results indicated that visual target and movement amplitude is not a suitable source of contextual information to enable dual adaptation in our task. Interference was observed in groups who were exposed to opposing visuomotor rotations, or a visuomotor rotation and no rotation, both when the onset of the visuomotor rotations was sudden and when it occurred gradually over the course of training. Furthermore, the pattern of interference indicated that the inability to dual adapt was a result of the generalisation of learning between the two visuomotor mappings associated with each of the visual target and movement amplitudes. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Augmented visual feedback can have a profound bearing on the stability of bimanual coordination. Indeed, this has been used to render tractable the study of patterns of coordination that cannot otherwise be produced in a stable fashion. In previous investigations (Carson et al. 1999), we have shown that rhythmic movements, brought about by the contraction of muscles on one side of the body, lead to phase-locked changes in the excitability of homologous motor pathways of the opposite limb. The present study was conducted to assess whether these changes are influenced by the presence of visual feedback of the moving limb. Eight participants performed rhythmic flexion-extension movements of the left wrist to the beat of a metronome (1.5 Hz). In 50% of trials, visual feedback of wrist displacement was provided in relation to a target amplitude, defined by the mean movement amplitude generated during the immediately preceding no-feedback trial. Motor evoked potentials (MEPs) were evoked in the quiescent muscles of the right limb by magnetic stimulation of the left motor cortex. Consistent with our previous observations, MEP amplitudes were modulated during the movement cycle of the opposite limb. The extent of this modulation was, however, smaller in the presence of visual feedback of the moving limb (FCR omega(2) = 0.41; ECR omega(2) = 0.29) than in trials in which there was no visual feedback (FCR omega(2) = 0.51; ECR omega(2) = 0.48). In addition, the relationship between the level of FCR activation and the excitability of the homologous corticospinal pathway of the opposite limb was sensitive to the vision condition; the degree of correlation between the two variables was larger when there was no visual feedback of the moving limb. The results of the present study support the view that increases in the stability of bimanual coordination brought about by augmented feedback may be mediated by changes in the crossed modulation of excitability in homologous motor pathways.
Abstract:
Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for survival. We used an ecologically valid driving simulation to compare and contrast the neural substrates of egocentric (head-on approach) and allocentric (lateral approach) TTC tasks in a fully factorial, event-related fMRI design. Compared to colour control tasks, both egocentric and allocentric TTC tasks activated left ventral premotor cortex/frontal operculum and inferior parietal cortex, the same areas that have previously been implicated in temporal attentional orienting. Despite differences in visual and cognitive demands, both TTC and temporal orienting paradigms encourage the use of temporally predictive information to guide behaviour, suggesting these areas may form a core network for temporal prediction. We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Specifically, V1 activity increased with the increasing likelihood of reporting a collision, suggesting top-down attentional modulation of early visual processing areas as a function of subjective collision. Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest motor preparation mechanisms were engaged, despite the perceptual nature of the task.
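The perceptual index tau mentioned above is, for an object approaching at constant velocity, equal to the true time-to-contact; geometrically it is the ratio of the object's optical angle to that angle's rate of expansion, which reduces to distance over closing speed. A minimal sketch, with hypothetical distances and speeds (not values from the study):

```python
# Time-to-contact via the optical variable tau: for constant approach
# velocity, tau = Z / (dZ/dt), i.e. remaining distance over closing speed,
# and equals the true time to contact. Values below are hypothetical.

def tau(distance, closing_speed):
    """Estimate time-to-contact for a constant-velocity approach."""
    return distance / closing_speed

# An object 30 m away closing at 10 m/s will arrive in 3 seconds.
print(tau(30.0, 10.0))  # 3.0
```

Tau-dot, the temporal derivative of this quantity, is the variable the abstract reports as predictive of collision judgements.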
Abstract:
Rapid orientating movements of the eyes are believed to be controlled ballistically. The mechanism underlying this control is thought to involve a comparison between the desired displacement of the eye and an estimate of its actual position (obtained from the integration of the eye velocity signal). This study shows, however, that under certain circumstances fast gaze movements may be controlled quite differently and may involve mechanisms which use visual information to guide movements prospectively. Subjects were required to make large gaze shifts in yaw towards a target whose location and motion were unknown prior to movement onset. Six of those tested demonstrated remarkable accuracy when making gaze shifts towards a target that appeared during their ongoing movement. In fact their level of accuracy was not significantly different from that shown when they performed a 'remembered' gaze shift to a known stationary target (F(3,15) = 0.15, p > 0.05). The lack of a stereotypical relationship between the skew of the gaze velocity profile and movement duration indicates that on-line modifications were being made. It is suggested that a fast route from the retina to the superior colliculus could account for this behaviour and that models of oculomotor control need to be updated.
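The position estimate described above, obtained by integrating the eye velocity signal, can be illustrated with a simple discrete (Euler) sum. The sampling rate and velocity values are hypothetical, chosen only to make the arithmetic transparent:

```python
# Estimating eye position by integrating a velocity signal: each velocity
# sample is multiplied by the sample interval and accumulated. Sampling
# rate and velocities here are hypothetical illustrations.

def integrate_velocity(velocities, dt):
    """Accumulate velocity samples (deg/s) into a position trace (deg)."""
    position = 0.0
    trajectory = []
    for v in velocities:
        position += v * dt
        trajectory.append(position)
    return trajectory

# 100 ms of samples at 1 kHz with a constant 300 deg/s velocity
# yields an estimated displacement of 30 degrees.
traj = integrate_velocity([300.0] * 100, 0.001)
print(round(traj[-1], 6))  # 30.0
```

Comparing such an accumulated estimate against the desired displacement is the ballistic control scheme the abstract contrasts with prospective, visually guided control.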
Abstract:
Human motor behaviour is continually modified on the basis of errors between desired and actual movement outcomes. It is emerging that the role played by the primary motor cortex (M1) in this process is contingent upon a variety of factors, including the nature of the task being performed, and the stage of learning. Here we used repetitive TMS to test the hypothesis that M1 is intimately involved in the initial phase of sensorimotor adaptation. Inhibitory theta burst stimulation was applied to M1 prior to a task requiring modification of torques generated about the elbow/forearm complex in response to rotations of a visual feedback display. Participants were first exposed to a 30° clockwise (CW) rotation (Block A), then a 60° counterclockwise rotation (Block B), followed immediately by a second block of 30° CW rotation (A2). In the STIM condition, participants received 20 s of continuous theta burst stimulation (cTBS) prior to the initial A Block. In the conventional (CON) condition, no stimulation was applied. The overt characteristics of performance in the two conditions were essentially equivalent with respect to the errors exhibited upon exposure to a new variant of the task. There were, however, profound differences between the conditions in the latency of response preparation, and the excitability of corticospinal projections from M1, which accompanied phases of de-adaptation and re-adaptation (during Blocks B and A2). Upon subsequent exposure to the A rotation 24 h later, the rate of re-adaptation was lower in the stimulation condition than that present in the conventional condition. These results support the assertion that primary motor cortex assumes a key role in a network that mediates adaptation to visuomotor perturbation, and emphasise that it is engaged functionally during the early phase of learning.
Abstract:
Here we investigated the influence of angular separation between visual and motor targets on concurrent adaptation to two opposing visuomotor rotations. We inferred the extent of generalisation between opposing visuomotor rotations at individual target locations based on whether interference (negative transfer) was present. Our main finding was that dual adaptation occurred to opposing visuomotor rotations when each was associated with different visual targets but shared a common motor target. Dual adaptation could have been achieved either within a single sensorimotor map (i.e. with different mappings associated with different ranges of visual input), or by forming two different internal models (the selection of which would be based on contextual information provided by target location). In the present case, the pattern of generalisation was dependent on the relative position of the visual targets associated with each rotation. Visual targets nearest the workspace of the opposing visuomotor rotation exhibited the most interference (i.e. generalisation). When the minimum angular separation between visual targets was increased, the extent of interference was reduced. These results suggest that the separation in the range of sensory inputs is the critical requirement to support dual adaptation within a single sensorimotor mapping.
Abstract:
A rapidly increasing number of Web databases have now become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to query results, e.g., advertisements, navigation bars, etc. The problem of extracting structured data from query result pages is critical for web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied. A number of approaches have been proposed. As the structures of Web pages become more and more complex, the existing approaches start to fail, and most of them do not remove irrelevant content which may affect the accuracy of data record extraction. We propose an automated approach for Web data extraction. First, it makes use of visual features and query terms to identify data sections and extracts data records in these sections. We also represent several content and visual features of visual blocks in a data section, and use them to filter out noisy blocks. Second, it measures similarity between data items in different data records based on their visual and content features, and aligns them into different groups so that the data in the same group have the same semantics. The results of our experiments with a large set of Web query result pages in different domains show that our proposed approaches are highly effective.
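The alignment step described above, grouping data items from different records so that items in a group share the same semantics, can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a simple Jaccard token similarity for the paper's combined visual/content similarity, and the record data and threshold are hypothetical.

```python
# Sketch of similarity-based alignment: items from different data records
# are grouped when their pairwise similarity exceeds a threshold. Jaccard
# token overlap stands in for the paper's visual/content similarity, and
# the records and threshold below are purely hypothetical.

def similarity(a, b):
    """Jaccard similarity over lowercase token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def align(records, threshold=0.15):
    """Greedily place each item in the first group whose representative
    (first) item is sufficiently similar; otherwise start a new group."""
    groups = []
    for record in records:
        for item in record:
            for group in groups:
                if similarity(item, group[0]) >= threshold:
                    group.append(item)
                    break
            else:
                groups.append([item])
    return groups

records = [["Canon EOS R6 camera", "$1,999", "In stock"],
           ["Nikon Z6 camera", "$1,599", "In stock"]]
groups = align(records)
print(len(groups))  # 4 groups: names, two price items, availability
```

Here the two product names align into one group and the two availability items into another, while the prices (sharing no tokens under this crude measure) remain separate, which is why a richer visual/content similarity such as the one the abstract describes is needed in practice.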