790 results for Movement Sequences
Abstract:
Parkinson's disease (PD) is a neurodegenerative movement disorder primarily due to basal ganglia dysfunction. While much research has been conducted on Parkinsonian deficits in the traditional arena of musculoskeletal limb movement, research in other functional motor tasks is lacking. The present study examined articulation in PD with increasingly complex sequences of articulatory movement. Of interest was whether dysfunction would affect articulation in the same manner as limb-movement impairment. In particular, since very similar (homogeneous) articulatory sequences (the tongue-twister effect) are more difficult for healthy individuals to achieve than dissimilar (heterogeneous) gestures, while the reverse may apply for skeletal movements in PD, we asked which factor would dominate when PD patients articulated various grades of artificial tongue twisters: the influence of the disease or a possible difference between the two motor systems. Execution was especially impaired when articulation involved a sequence of motor programs heterogeneous in terms of place of articulation. The results are suggestive of a hypokinetic tendency in complex sequential articulatory movement, as in limb movement. It appears that PD patients do show abnormalities in articulatory movement which are similar to those of the musculoskeletal system. The present study suggests that an underlying disease effect modulates movement impairment across different functional motor systems. (C) 1998 Academic Press.
Abstract:
Previously we have presented a model for generating human-like arm and hand movements on a unimanual anthropomorphic robot involved in human-robot collaboration tasks. The present paper extends that model to the generation of human-like bimanual movement sequences in scenarios cluttered with obstacles. Movement planning involves large-scale nonlinear constrained optimization problems, which are solved using the IPOPT solver. Simulation studies show that the model generates feasible and realistic hand trajectories for action sequences involving the two hands. The computational costs involved in the planning allow for real-time human-robot interaction. A qualitative analysis reveals that the movements of the robot exhibit basic characteristics of human movements.
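A minimal sketch of the kind of constrained trajectory optimization this abstract refers to, scaled down to a toy problem: a handful of 2-D waypoints for one hand, a path-length cost, and a single obstacle-clearance constraint, solved here with SciPy's SLSQP rather than IPOPT. The waypoint count, cost, and obstacle are illustrative assumptions, not the authors' formulation.

```python
# Toy trajectory optimization: plan N waypoints for one hand in 2-D,
# minimizing (squared) path length while keeping a minimum clearance
# from one obstacle. Illustrative stand-in for a large-scale IPOPT problem.
import numpy as np
from scipy.optimize import minimize

N = 10                             # number of free waypoints (assumption)
start = np.array([0.0, 0.0])
goal = np.array([1.0, 1.0])
obstacle = np.array([0.5, 0.45])   # hypothetical obstacle centre
clearance = 0.15                   # required minimum distance

def unpack(x):
    return np.vstack([start, x.reshape(N, 2), goal])

def path_length(x):
    path = unpack(x)
    return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1) ** 2)

def obstacle_margin(x):
    # >= 0 when every waypoint keeps the required clearance
    return np.linalg.norm(unpack(x) - obstacle, axis=1) - clearance

x0 = np.linspace(start, goal, N + 2)[1:-1].ravel()   # straight-line initial guess
res = minimize(path_length, x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": obstacle_margin}])
print(res.success, unpack(res.x).round(3))
```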
Abstract:
Two experiments examined imitation of lateralised body movement sequences presented at six viewing angles (0°, 60°, 120°, 180°, 240°, and 300° rotation relative to the participant's body). Experiment 1 found that, when participants were instructed simply to "do what the model does", at all viewing angles they produced more actions using the same side of the body as the model (anatomical matches) than actions using the opposite side (anatomical non-matches). In Experiment 2 participants were instructed to produce either anatomical matches or anatomical non-matches of observed actions. When the model was viewed from behind (0°), the anatomically matching group were more accurate than the anatomically non-matching group, but the non-matching group was superior when the model faced the participant (180° and 240°). No reliable differences were observed between groups at 60°, 120°, and 300°. In combination, the results of Experiments 1 and 2 suggest that, when they are confronting a model, people choose to imitate the hard way; they attempt to match observed actions anatomically, in spite of the fact that anatomical matching is more subject to error than anatomical non-matching.
Abstract:
What is already known on the subject? Multi-sensory treatment approaches have been shown to impact outcome measures positively, such as accuracy of speech movement patterns and speech intelligibility in adults with motor speech disorders, as well as in children with apraxia of speech, autism and cerebral palsy. However, there has been no empirical study using multi-sensory treatment for children with speech sound disorders (SSDs) who demonstrate motor control issues in the jaw and orofacial structures (e.g. jaw sliding, jaw overextension, inadequate lip rounding/retraction and decreased integration of speech movements). What this paper adds? Findings from this study indicate that, for speech production disorders where both the planning and production of spatiotemporal parameters of movement sequences for speech are disrupted, multi-sensory treatment programmes that integrate auditory, visual and tactile-kinesthetic information improve auditory and visual accuracy of speech production. The training words (practised in treatment) and test words (not practised in treatment) both demonstrated positive change in most participants, indicating generalization of target features to untrained words. It is inferred that treatment that focuses on integrating multi-sensory information and normalizing parameters of speech movements is an effective method for treating children with SSDs who demonstrate speech motor control issues.
Abstract:
Acuity for elbow joint position sense (JPS) is reduced when head position is modified. Movement of the head is associated with biomechanical changes in the neck and shoulder musculoskeletal system, which may explain changes in elbow JPS. The present study aimed to determine whether elbow JPS is also influenced by illusory changes in head position. Simultaneous vibration of sternocleidomastoid (SCM) and the contralateral splenius was applied to 14 healthy adult human subjects. Muscle vibration or passive head rotation was introduced between presentation and reproduction of a target elbow position. Ten out of 14 subjects reported illusions consistent with lengthening of the vibrated muscles. In these 10 subjects, absolute error for elbow JPS increased with left SCM/right splenius vibration but not with right SCM/left splenius vibration. Absolute error also increased with right rotation, with a trend for increased error with left rotation. These results demonstrated that both actual and illusory changes in head position are associated with diminished acuity for elbow JPS, suggesting that the influence of head position on upper limb JPS depends, at least partially, on perceived head position.
Abstract:
Self-controlled practice implies a process of decision making, which suggests that the options in a self-controlled practice condition could affect learners. The number of task components with no fixed position in a movement sequence may affect the way learners self-control their practice. A 200 cm coincident timing track with 90 light-emitting diodes (LEDs), the first and the last LEDs being the warning and the target lights respectively, was set so that the apparent speed of the light along the track was 1.33 m/sec. Participants were required to touch six sensors sequentially, the last one coincidently with the lighting of the target light (timing task). Group 1 (n=55) had only one constraint and were instructed to touch the sensors in any order except for the last sensor, which had to be the one positioned close to the target light. Group 2 (n=53) had three constraints: the first two and the last sensor to be touched. Both groups practiced the task until timing error was less than 30 msec on three consecutive trials. There were no statistically significant differences between groups in the number of trials needed to reach the performance criterion, but (a) participants in Group 2 created fewer sequences compared to Group 1 and (b) were more likely to use the same sequence throughout the learning process. The number of options for a movement sequence affected the way learners self-controlled their practice but had no effect on the amount of practice needed to reach criterion performance.
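A small sketch of the practice-termination rule described above (timing error below 30 msec on three consecutive trials), assuming one absolute timing error per trial; the function name and example data are illustrative.

```python
# Minimal sketch of the termination criterion: practice ends once
# `run_length` consecutive trials have an absolute timing error below
# `threshold` milliseconds.
def reached_criterion(timing_errors_ms, threshold=30.0, run_length=3):
    consecutive = 0
    for err in timing_errors_ms:
        consecutive = consecutive + 1 if err < threshold else 0
        if consecutive >= run_length:
            return True
    return False

print(reached_criterion([45, 28, 22, 19]))   # True: last three trials < 30 ms
```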
Abstract:
This work discusses the determination of breathing patterns in time sequences of images obtained from magnetic resonance (MR) and their use in the temporal registration of coronal and sagittal images. The registration is made without the use of any triggering information and without any special gas to enhance the contrast. The temporal sequences of images are acquired in free breathing. The real movement of the lung has never been seen directly, as it is totally dependent on its surrounding muscles and collapses without them. The visualization of the lung in motion is a current topic of research in medicine. The lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared to computerized tomography (CT), MR imaging involves longer acquisition times but is preferable because it does not involve radiation. As coronal and sagittal sequences of images are orthogonal to each other, their intersection corresponds to a segment in three-dimensional space. The registration is based on the analysis of this intersection segment. A time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. The algorithm proposed in this work can detect asynchronous movements of the internal lung structures and of the organs surrounding the lung. It is assumed that the diaphragmatic movement is the principal movement and that all the lung structures move almost synchronously. The synchronization is performed through a pattern named the respiratory function. This pattern is obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function. A greedy active contour algorithm adjusts small discrepancies originated by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns. Finally, the composition of coronal and sagittal image pairs that are in the same breathing phase is realized by comparing respiratory patterns originated from the diaphragmatic and upper boundary surfaces. When available, the respiratory patterns associated with internal lung structures are also used. The results of the proposed method are compared with the pixel-by-pixel comparison method. The proposed method increases the number of registered pairs representing composed images and allows an easy check of the breathing phase. (C) 2010 Elsevier Ltd. All rights reserved.
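A rough sketch of the 2DST idea described above: one image column (standing in for the coronal/sagittal intersection segment) is stacked per time frame, a 1-D respiratory signal is read off the resulting image, and frames are paired by closest respiratory value. The gradient-based diaphragm detector below is a simplifying assumption, not the paper's interval Hough transform and active-contour processing.

```python
# Stack the intersection segment over time into a 2-D spatio-temporal (2DST)
# image, extract a per-frame "respiratory function", and pair coronal and
# sagittal frames that are in the closest breathing phase.
import numpy as np

def two_dst(frames, column):
    """Stack one image column per time frame -> 2-D spatio-temporal image."""
    return np.stack([f[:, column] for f in frames], axis=1)

def respiratory_function(dst_image):
    """Per-frame diaphragm position: row of maximum vertical intensity gradient."""
    grad = np.abs(np.diff(dst_image.astype(float), axis=0))
    return grad.argmax(axis=0)

def pair_by_phase(coronal_resp, sagittal_resp):
    """For each coronal frame, pick the sagittal frame with the closest phase."""
    return [int(np.abs(sagittal_resp - c).argmin()) for c in coronal_resp]

# Usage with synthetic data (two orthogonal 64x64 sequences, 20 frames each):
rng = np.random.default_rng(0)
cor = [rng.random((64, 64)) for _ in range(20)]
sag = [rng.random((64, 64)) for _ in range(20)]
pairs = pair_by_phase(respiratory_function(two_dst(cor, 32)),
                      respiratory_function(two_dst(sag, 32)))
print(pairs[:5])
```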
Abstract:
Many of our everyday tasks require the control of the serial order and the timing of component actions. Using the dynamic neural field (DNF) framework, we address the learning of representations that support the performance of precisely timed action sequences. In continuation of previous modeling work and robotics implementations, we specifically ask how feedback about executed actions might be used by the learning system to fine-tune a joint memory representation of the ordinal and the temporal structure that has initially been acquired by observation. The perceptual memory is represented by a self-stabilized, multi-bump activity pattern of neurons encoding instances of a sensory event (e.g., color, position or pitch) which guides sequence learning. The strength of the population representation of each event is a function of the elapsed time since sequence onset. We propose and test in simulations a simple learning rule that detects a mismatch between the expected and realized timing of events and adapts the activation strengths in order to compensate for the movement time needed to achieve the desired effect. The simulation results show that the effector-specific memory representation can be robustly recalled. We discuss the impact of the fast, activation-based learning that the DNF framework provides for robotics applications.
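A toy sketch of the mismatch-driven adaptation described above, under two explicit assumptions: each sequence event is represented by a single activation strength, and recall time decreases linearly with that strength. The mapping, the constant movement time, and the learning rate are illustrative placeholders, not the DNF model's dynamics.

```python
# Mismatch-driven adaptation: each event has an activation strength; the
# realized event time is assumed to decrease with strength (toy linear
# mapping) and to be delayed by a constant movement time. After each trial,
# strengths are nudged so realized times converge to the desired times.
import numpy as np

desired_times = np.array([1.0, 2.0, 3.0])   # target event times (s)
movement_time = 0.4                         # assumed constant execution lag
strengths = np.ones(3)                      # initial activation strengths
eta = 0.5                                   # learning rate (assumption)

def realized_times(strengths):
    # Stronger activation -> earlier recall; execution adds movement_time.
    return desired_times + (1.0 - strengths) + movement_time

for trial in range(30):
    mismatch = realized_times(strengths) - desired_times   # > 0 means too late
    strengths += eta * mismatch                            # boost late events

print(np.round(realized_times(strengths) - desired_times, 3))  # ~[0, 0, 0]
```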
Abstract:
Children were afforded the opportunity to control the order of repetitions for three novel spatiotemporal sequences. The following was predicted: a) children and adults in the self-regulated (SELF) groups would produce faster movement times (MT) and reaction times (RT) and greater recall success (RS) during retention compared to the age-matched yoked (YOKE) groups; b) children would choose to switch sequences less often than adults; c) adults would produce faster MT and RT and greater RS than the children during acquisition and retention, independent of experimental group. During acquisition, no effects were seen for RS; however, for MT and RT there was a main effect for age as well as block. During retention, a main effect for practice condition was seen for RS but failed to reach statistical significance for MT and RT, thus partially supporting our first and second hypotheses. The third hypothesis was not supported.
Abstract:
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which consists of dividing recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple-action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers on the recognition of simple actions, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour on a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method only experiences a limited decrease in performance when moderate amounts of noise are added.
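A minimal sketch in the unified spirit described above: a hand-written grammar maps a complex action to an ordered list of simple actions with a prior, and each simple action is scored on its span of zone-crossing observations with a discrete-HMM forward pass. All models, probabilities, and action names below are invented placeholders, not the paper's learned components.

```python
# Score one complex-action hypothesis as grammar prior plus the sum of
# per-segment HMM log-likelihoods over zone-crossing observations.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | discrete HMM (pi, A, B))."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hypothetical simple-action HMMs over 3 zones (observations 0..2).
walk_to_car = (np.array([0.8, 0.2]),                          # initial probs
               np.array([[0.7, 0.3], [0.2, 0.8]]),            # transitions
               np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]))  # zone emissions
loiter = (np.array([0.5, 0.5]),
          np.array([[0.9, 0.1], [0.1, 0.9]]),
          np.array([[0.4, 0.4, 0.2], [0.2, 0.4, 0.4]]))
hmms = {"walk_to_car": walk_to_car, "loiter": loiter}

# Hypothetical grammar: complex action -> ordered simple actions + log-prior.
grammar = {"pick_up_vehicle": (["walk_to_car"], np.log(0.7)),
           "suspicious_wait": (["loiter", "walk_to_car"], np.log(0.3))}

def score(complex_action, segments):
    simple_actions, log_prior = grammar[complex_action]
    return log_prior + sum(forward_loglik(seg, *hmms[a])
                           for a, seg in zip(simple_actions, segments))

obs = [np.array([0, 0, 1, 2]), np.array([2, 2, 1, 0])]  # zone crossings per segment
print(score("suspicious_wait", obs))
```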
Abstract:
Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. Assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (Naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
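A compact sketch of the graph pipeline described above: each period's sequence of assembly activations is turned into a directed graph (node = assembly, edge = consecutive activation), a few graph attributes are extracted, and periods are classified with a Naive Bayes classifier. The synthetic activation sequences and the particular attributes chosen are assumptions for illustration only, not the paper's recordings or feature set.

```python
# Assembly-graph features + Naive Bayes classification of experimental periods.
import numpy as np
import networkx as nx
from sklearn.naive_bayes import GaussianNB

def assembly_graph(activations):
    """Directed graph from a sequence of assembly ids (consecutive pairs)."""
    g = nx.DiGraph()
    g.add_edges_from(zip(activations[:-1], activations[1:]))
    return g

def graph_features(g):
    degrees = [d for _, d in g.out_degree()]
    return [g.number_of_nodes(), g.number_of_edges(),
            float(np.mean(degrees)), nx.density(g)]

rng = np.random.default_rng(1)
# Synthetic "periods" (e.g. pre/during/post exploration) with different repertoires.
X, y = [], []
for label, n_assemblies in [(0, 5), (1, 8), (2, 6)]:
    for _ in range(20):
        seq = rng.integers(0, n_assemblies, size=200)
        X.append(graph_features(assembly_graph(seq)))
        y.append(label)

clf = GaussianNB().fit(X, y)
print(clf.score(X, y))   # training accuracy on the synthetic periods
```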
Abstract:
Processing efficiency theory predicts that anxiety reduces the processing capacity of working memory and has detrimental effects on performance. When tasks place little demand on working memory, the negative effects of anxiety can be avoided by increasing effort. Although performance efficiency decreases, there is no change in performance effectiveness. When tasks impose a heavy demand on working memory, however, anxiety leads to decrements in efficiency and effectiveness. These presumptions were tested using a modified table tennis task that placed low (LWM) and high (HWM) demands on working memory. Cognitive anxiety was manipulated through a competitive ranking structure and prize money. Participants' accuracy in hitting concentric circle targets in predetermined sequences was taken as a measure of performance effectiveness, while probe reaction time (PRT), perceived mental effort (RSME), visual search data, and arm kinematics were recorded as measures of efficiency. Anxiety had a negative effect on performance effectiveness in both LWM and HWM tasks. There was an increase in frequency of gaze and in PRT and RSME values in both tasks under high vs. low anxiety conditions, implying decrements in performance efficiency. However, participants spent more time tracking the ball in the HWM task and employed a shorter tau margin when anxious. Although anxiety impaired performance effectiveness and efficiency, decrements in efficiency were more pronounced in the HWM task than in the LWM task, providing support for processing efficiency theory.