919 results for Speech Motor Control


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Unilateral ischemic stroke disrupts the well-balanced interactions within bilateral cortical networks. Restitution of interhemispheric balance is thought to contribute to post-stroke recovery. Longitudinal measurements of cerebral blood flow (CBF) changes might act as a surrogate marker for this process. OBJECTIVE To quantify longitudinal CBF changes using arterial spin labeling MRI (ASL) and interhemispheric balance within the cortical sensorimotor network, and to assess their relationship with motor hand function recovery. METHODS Longitudinal CBF data were acquired in 23 patients at 3 and 9 months after cortical sensorimotor stroke and in 20 healthy controls using pulsed ASL. Recovery of grip force and manual dexterity was assessed with tasks requiring power and precision grips. Voxel-based analysis was performed to identify areas of significant CBF change. Region-of-interest analyses were used to quantify the interhemispheric balance across nodes of the cortical sensorimotor network. RESULTS Dexterity was more affected and recovered at a slower pace than grip force. In patients with successful recovery of dexterous hand function, CBF decreased over time in the contralesional supplementary motor area, paralimbic anterior cingulate cortex and superior precuneus, and interhemispheric balance returned to healthy control levels. In contrast, patients with poor recovery presented with sustained hypoperfusion in the sensorimotor cortices encompassing the ischemic tissue, and CBF remained lateralized to the contralesional hemisphere. CONCLUSIONS Sustained perfusion imbalance within the cortical sensorimotor network, as measured with task-unrelated ASL, is associated with poor recovery of dexterous hand function after stroke. CBF at rest might be used to monitor recovery and gain prognostic information.
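The abstract does not spell out how interhemispheric balance was computed from the ROI analyses. A common convention, assumed in this sketch, is a normalized laterality index over ROI-mean CBF values; the function name and the example values below are hypothetical, not taken from the study:

```python
def laterality_index(cbf_contralesional, cbf_ipsilesional):
    """Normalized interhemispheric CBF balance for one ROI pair.

    Returns a value in [-1, 1]: near 0 means balanced perfusion,
    positive means lateralized to the contralesional hemisphere.
    """
    return (cbf_contralesional - cbf_ipsilesional) / (
        cbf_contralesional + cbf_ipsilesional
    )

# Hypothetical ROI-mean CBF values (ml/100 g/min):
balanced = laterality_index(52.0, 50.0)     # close to 0
lateralized = laterality_index(55.0, 30.0)  # clearly positive
```

Under this convention, the "CBF remained lateralized to the contralesional hemisphere" finding in poor recoverers would correspond to a persistently positive index rather than one drifting back toward zero.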

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues, they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information causes aphasic patients to explore the speaker's face less.

Relevance:

30.00%

Publisher:

Abstract:

Schizophrenia patients are severely impaired in nonverbal communication, including social perception and gesture production. However, the impact of nonverbal social perception on gestural behavior remains unknown, as is the contribution of negative symptoms, working memory, and abnormal motor behavior. Thus, the study tested whether poor nonverbal social perception was related to impaired gesture performance, gestural knowledge, or motor abnormalities. Forty-six patients with schizophrenia (80%), schizophreniform (15%), or schizoaffective disorder (5%) and 44 healthy controls matched for age, gender, and education were included. Participants completed 4 tasks on nonverbal communication including nonverbal social perception, gesture performance, gesture recognition, and tool use. In addition, they underwent comprehensive clinical and motor assessments. Patients presented impaired nonverbal communication in all tasks compared with controls. Furthermore, in contrast to controls, performance in patients was highly correlated between tasks, and this was not explained by supramodal cognitive deficits such as working memory. Schizophrenia patients with impaired gesture performance also demonstrated poor nonverbal social perception, gestural knowledge, and tool use. Importantly, motor/frontal abnormalities negatively mediated the strong association between nonverbal social perception and gesture performance. Negative symptoms and antipsychotic dosage were unrelated to the nonverbal tasks. The study confirmed a generalized nonverbal communication deficit in schizophrenia. Specifically, the findings suggested that nonverbal social perception in schizophrenia has a relevant impact on gestural impairment beyond the negative influence of motor/frontal abnormalities.

Relevance:

30.00%

Publisher:

Abstract:

Stroke is one of the most common conditions requiring rehabilitation, and its motor impairments are a major cause of permanent disability. Hemiparesis is observed in 80% of patients after acute stroke. Neuroimaging studies have shown that real and imagined movements produce similar patterns of brain activation, supplying evidence that both rely on the same process. Within this context, the combination of mental practice (MP) with physical and occupational therapy appears to be a natural complement based on neurorehabilitation concepts. Our study seeks to investigate whether MP is an effective adjunct therapy for stroke rehabilitation of the upper limbs. Searches of PubMed (Medline), ISI Web of Knowledge (Institute for Scientific Information) and SciELO (Scientific Electronic Library Online) were completed on 20 February 2015. Data were collected on the following variables: sample size, type of supervision, configuration of the mental practice, setting of the physical practice (intensity, number of sets and repetitions, duration of contractions, rest interval between sets, weekly and total duration), measures of sensorimotor deficits used in the main studies, and significant results. Random-effects models were used that take into account the variance within and between studies. Seven articles were selected. There was no statistically significant difference between the two groups (MP vs. control), with a pooled effect of −0.6 (95% CI: −1.27 to 0.04) for upper limb motor restoration after stroke. The present meta-analysis concluded that MP is not effective as an adjunct therapeutic strategy for upper limb motor restoration after stroke.
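The abstract names random-effects models but not the estimator. The sketch below assumes the standard DerSimonian-Laird procedure, which may not match the authors' exact method; the two example effect sizes and variances are invented, not the seven studies in the review:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect (DerSimonian-Laird estimator).

    effects: per-study effect sizes; variances: their within-study
    sampling variances. Returns (pooled_effect, (ci_low, ci_high)).
    """
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented inputs: two studies with effects -0.5 and -0.7
pooled, ci = dersimonian_laird([-0.5, -0.7], [0.3, 0.3])
```

With these invented inputs the 95% CI straddles zero, which illustrates the reading of the abstract's result: a pooled effect of −0.6 whose interval (−1.27 to 0.04) includes zero is not statistically significant.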

Relevance:

30.00%

Publisher:

Abstract:

In the current study we investigated whether ego depletion negatively affects attention regulation under pressure in sports by assessing participants' dart throwing performance and accompanying gaze behavior. According to the strength model of self-control, the most important aspect of self-control is attention regulation. Because higher levels of state anxiety are associated with impaired attention regulation, we chose a mixed design with ego depletion (yes vs. no) as between-subjects factor and anxiety level (high vs. low) as within-subjects factor. Participants performed a perceptual-motor task requiring selective attention, namely, dart throwing. In line with our expectations, depleted participants in the high-anxiety condition performed worse and displayed a shorter final fixation on the bull's eye, demonstrating that when one's self-control strength is depleted, attention regulation under pressure cannot be maintained. This is the first study that directly supports the general assumption that ego depletion is a major factor influencing attention regulation under pressure.

Relevance:

30.00%

Publisher:

Abstract:

In the present article, we argue that it may be fruitful to incorporate the ideas of the strength model of self-control into the core assumptions of the well-established attentional control theory (ACT). In ACT, it is assumed that anxiety automatically leads to attention disruption and increased distractibility, which may impair subsequent cognitive or perceptual-motor performance, but only if individuals do not have the ability to counteract this attention disruption. However, ACT does not clarify which process determines whether one can volitionally regulate attention despite experiencing high levels of anxiety. In terms of the strength model of self-control, attention regulation can be viewed as a self-control act that depends on the momentary availability of self-control strength. We review literature showing that self-control strength moderates the anxiety-performance relationship, discuss how to integrate these two theoretical models, and offer practical recommendations on how to counteract negative anxiety effects.

Relevance:

30.00%

Publisher:

Abstract:

In the present study we investigated whether ego depletion negatively affects attention regulation under pressure in sports by assessing participants' dart throwing performance and accompanying gaze behavior. According to the strength model of self-control, the most important aspect of self-control is attention regulation (Schmeichel & Baumeister, 2010). As higher levels of state anxiety are associated with impaired attention regulation (Nieuwenhuys & Oudejans, 2012), we chose a mixed design with ego depletion (yes vs. no) as between-subjects factor and anxiety level (high vs. low) as within-subjects factor. A total of 28 right-handed students participated in our study (Mage = 23.4, SDage = 2.5; 10 female; no professional dart experience). Participants performed a perceptual-motor task requiring selective attention, namely, dart throwing. The task was performed while participants were positioned high and low on a climbing wall (i.e., with high and low levels of anxiety). In line with our expectations, a mixed-design ANOVA revealed that depleted participants in the high-anxiety condition performed worse (p < .001) and displayed a shorter final fixation on the bull's eye (p < .01) than in the low-anxiety condition, demonstrating that when one is depleted, attention regulation under pressure cannot be maintained. This is the first study that directly supports the general assumption that ego depletion is a major factor influencing attention regulation under pressure.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Co-speech gestures are omnipresent and a crucial element of human interaction, facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension, in terms of accuracy in a decision task. METHOD: Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with either meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. RESULTS: In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. CONCLUSION: Co-speech gestures play an important role for aphasic patients, as they modulate comprehension. Incongruent gestures evoke significant interference and impair patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

Relevance:

30.00%

Publisher:

Abstract:

Background: Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues, they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Methods: Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Results: Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Conclusion: Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information causes aphasic patients to explore the speaker's face less.

Keywords: Gestures, visual exploration, dialogue, aphasia, apraxia, eye movements

Relevance:

30.00%

Publisher:

Abstract:

The present study provided further information about stuttering among bilingual populations and attempted to assess the significance of repeated oral-motor movements during an adaptation task in two bilingual adults. This was accomplished by asking bilingual people who stutter to complete an adaptation task on the same written passage in two different languages. The following research question was explored: in bilingual speakers who stutter, what is the effect of altering the oral-motor movements by changing the language of the passage read during an adaptation task? Two bilingual adults were each asked to complete an adaptation task consisting of 10 readings in two separate conditions. Participants 1 and 2 completed two conditions, each of which contained a separate passage. Condition B consisted of an adaptation procedure in which the participants read five successive readings in English followed by five additional successive readings in Language 1 (L1). Following the completion of the first randomly assigned condition, the participant was given a rest period of 30 minutes before beginning the remaining condition and passage. Condition A consisted of an adaptation procedure in which the participants read five successive readings in L1 followed by five additional successive readings in English. Results across participants, conditions, and languages indicated an atypical adaptation curve over 10 readings, characterized by a dramatic increase in stuttering following a change of language. Closer examination of individual participants revealed differences in stuttering and adaptation among languages and conditions. Participants 1 and 2 demonstrated differences in adaptation and stuttering among languages. Participant 1 demonstrated an increase in stuttering following a change in language read in Condition B and a decrease in stuttering following a change in language read in Condition A.
It is speculated that language proficiency contributed to the observed differences in stuttering following a change of language. Participant 2 demonstrated an increase in stuttering following a change in language read in Condition A and a minimal increase in stuttering following a change in language read in Condition B. It is speculated that a change in the oral-motor plan contributed to the increase in stuttering in Condition A. Collectively, findings from this exploratory study lend support to an interactive effect between language proficiency and a change in the oral-motor plan contributing to increased stuttering following a change of language during an adaptation task.

Relevance:

30.00%

Publisher:

Abstract:

The cerebellum is the major brain structure that contributes to our ability to improve movements through learning and experience. We have combined computer simulations with behavioral and lesion studies to investigate how modification of synaptic strength at two different sites within the cerebellum contributes to a simple form of motor learning—Pavlovian conditioning of the eyelid response. These studies are based on the wealth of knowledge about the intrinsic circuitry and physiology of the cerebellum and the straightforward manner in which this circuitry is engaged during eyelid conditioning. Thus, our simulations are constrained by the well-characterized synaptic organization of the cerebellum and further, the activity of cerebellar inputs during simulated eyelid conditioning is based on existing recording data. These simulations have allowed us to make two important predictions regarding the mechanisms underlying cerebellar function, which we have tested and confirmed with behavioral studies. The first prediction describes the mechanisms by which one of the sites of synaptic modification, the granule to Purkinje cell synapses (gr → Pkj) of the cerebellar cortex, could generate two time-dependent properties of eyelid conditioning—response timing and the ISI function. An empirical test of this prediction using small, electrolytic lesions of the cerebellar cortex revealed the pattern of results predicted by the simulations. The second prediction made by the simulations is that modification of synaptic strength at the other site of plasticity, the mossy fiber to deep nuclei synapses (mf → nuc), is under the control of Purkinje cell activity. The analysis predicts that this property should confer mf → nuc synapses with resistance to extinction. Thus, while extinction processes erase plasticity at the first site, residual plasticity at mf → nuc synapses remains. 
The residual plasticity at the mf → nuc site confers the cerebellum with the capability for rapid relearning long after the learned behavior has been extinguished. We confirmed this prediction using a lesion technique that reversibly disconnected the cerebellar cortex at various stages during extinction and reacquisition of eyelid responses. The results of these studies represent significant progress toward a complete understanding of how the cerebellum contributes to motor learning.
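The two-site logic the simulations predict — extinction erasing gr → Pkj plasticity while the Purkinje-gated mf → nuc plasticity survives and supports rapid relearning — can be caricatured in a few lines. This is an invented toy model, not the authors' Hodgkin-Huxley-level simulation; all learning rules and rates are hypothetical:

```python
def simulate(trials, paired, cortex=0.0, nucleus=0.0,
             lr_ctx=0.2, lr_nuc=0.1):
    """Toy two-site plasticity model (all parameters invented).

    cortex  ~ gr -> Pkj plasticity (cerebellar cortex)
    nucleus ~ mf -> nuc plasticity (deep nuclei), gated by the
              cortical site and untouched during extinction.
    """
    for _ in range(trials):
        if paired:  # CS-US pairing (acquisition)
            cortex += lr_ctx * (1.0 - cortex)
            nucleus += lr_nuc * cortex * (1.0 - nucleus)
        else:       # CS alone (extinction) erases the cortical site only
            cortex -= lr_ctx * cortex
    return cortex, nucleus

ctx, nuc = simulate(50, paired=True)                            # acquisition
ctx, nuc = simulate(50, paired=False, cortex=ctx, nucleus=nuc)  # extinction
# cortex returns to ~0 while residual nuclear plasticity remains,
# so a later reacquisition run starts from a partly trained network (savings).
```

The design choice mirrors the abstract: nuclear plasticity only grows when the cortical site is engaged (the Purkinje-gating prediction), and the extinction rule deliberately leaves it intact.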

Relevance:

30.00%

Publisher:

Abstract:

The respiratory central pattern generator is a collection of medullary neurons that generates the rhythm of respiration. The respiratory central pattern generator feeds phrenic motor neurons, which, in turn, drive the main muscle of respiration, the diaphragm. The purpose of this thesis is to understand the neural control of respiration through mathematical models of the respiratory central pattern generator and phrenic motor neurons. We first designed and validated a Hodgkin-Huxley type model that mimics the behavior of phrenic motor neurons under a wide range of electrical and pharmacological perturbations. This model was constrained by physiological data from the literature. Next, we designed and validated a model of the respiratory central pattern generator by connecting four Hodgkin-Huxley type models of medullary respiratory neurons in a mutually inhibitory network. This network was in turn driven by a simple model of an endogenously bursting neuron, which acted as the pacemaker for the respiratory central pattern generator. Finally, the respiratory central pattern generator and phrenic motor neuron models were connected and their interactions studied. Our study of the models has provided a number of insights into the behavior of the respiratory central pattern generator and phrenic motor neurons. These include the suggestion of a role for the T-type and N-type calcium channels during single spikes and repetitive firing in phrenic motor neurons, as well as a better understanding of the network properties underlying respiratory rhythm generation. We also utilized an existing model of lung mechanics to study the interactions between the respiratory central pattern generator and ventilation.
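As a rough illustration of how mutual inhibition plus a slow recovery process can generate an alternating, respiration-like rhythm, the sketch below uses a two-population rate model with adaptation under constant drive. It is not the four-neuron Hodgkin-Huxley network of the thesis; every parameter is invented:

```python
def half_center(steps=5000, dt=0.001):
    """Two mutually inhibitory populations with slow adaptation,
    driven by constant input (a stand-in for the pacemaker drive).
    Returns the rate trace [(r1, r2), ...]; the units alternate."""
    r1, r2 = 0.6, 0.1      # population firing rates
    a1, a2 = 0.0, 0.0      # slow adaptation variables
    drive, g_inh = 1.0, 2.0
    tau_r, tau_a, b = 0.02, 0.4, 2.0
    relu = lambda x: max(x, 0.0)
    trace = []
    for _ in range(steps):
        # inputs computed from the previous state (forward Euler)
        i1 = relu(drive - g_inh * r2 - a1)
        i2 = relu(drive - g_inh * r1 - a2)
        r1 += dt / tau_r * (-r1 + i1)
        r2 += dt / tau_r * (-r2 + i2)
        a1 += dt / tau_a * (-a1 + b * r1)   # adaptation builds while active
        a2 += dt / tau_a * (-a2 + b * r2)
        trace.append((r1, r2))
    return trace
```

The active population slowly adapts until the suppressed one escapes and the network switches, the classic half-center mechanism; in the thesis's terms the two populations would correspond to inspiratory and expiratory phases.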

Relevance:

30.00%

Publisher:

Abstract:

Although there has been a lot of interest in recognizing and understanding air traffic control (ATC) speech, none of the published works have reported detailed field-data results. We have developed a system able to identify the language spoken and to recognize and understand sentences in both Spanish and English. We also present field results for several in-tower controller positions. To the best of our knowledge, this is the first time that field ATC speech (not simulated) has been captured, processed, and analyzed. The use of stochastic grammars allows variations in the standard phraseology that appear in field data. The robust understanding algorithm developed achieves 95% concept accuracy from ATC text input. It also allows changes in the presentation order of the concepts and the correction of errors created by the speech recognition engine, improving the percentage of fully correctly understood sentences by 17% and 25% absolute for English and Spanish, respectively, in relation to the percentage of fully correctly recognized sentences. The analysis of errors due to the spontaneity of the speech, and its comparison to read speech, is also carried out. A 96% word accuracy for read speech falls to 86% word accuracy for field ATC data in Spanish for the "clearances" task, confirming that field data are needed to estimate the performance of a system. A literature review and a critical discussion of the possibilities of speech recognition and understanding technology applied to ATC speech are also given.
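The abstract reports concept accuracy but does not define it. A common convention, assumed in this sketch, scores the hypothesized concept sequence against the reference the way word accuracy is scored, i.e. 1 minus alignment edit errors over reference length; the concept names below are invented, not the system's actual concept set:

```python
def edit_distance(ref, hyp):
    """Substitutions + deletions + insertions (Levenshtein DP)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]

def concept_accuracy(ref_concepts, hyp_concepts):
    """1 - edit errors / reference length, over concepts not words."""
    return 1.0 - edit_distance(ref_concepts, hyp_concepts) / len(ref_concepts)

ref = ["CALLSIGN", "CLIMB", "FLIGHT_LEVEL", "HEADING"]
hyp = ["CALLSIGN", "CLIMB", "FLIGHT_LEVEL"]  # one concept deleted
# concept_accuracy(ref, hyp) -> 0.75
```

Scoring at the concept level is what lets the understanding module tolerate reordered concepts and recognizer errors, which the abstract credits for the 17%/25% absolute improvement over plain recognition.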

Relevance:

30.00%

Publisher:

Abstract:

This work is part of an on-going collaborative project between the medical and signal processing communities to promote new research efforts on automatic diagnosis of OSA (obstructive sleep apnoea syndrome). In this paper, we explore the differences noted in phonetic classes (interphoneme) across groups (control/apnoea) and analyze their utility for OSA detection.

Relevance:

30.00%

Publisher:

Abstract:

Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions to the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented both with other modalities (multimodality) and with information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all the available resources, and the in-vehicle scenario is an excellent test-bed for this kind of initiative. In this contribution we propose an event-based HMI design framework which combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process-control mechanism that is being considered by W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives, presenting a flexible SCXML-based approach for the design of a wide range of multimodal, context-aware in-vehicle HMI interfaces. The proposed framework for HMI design and specification has been implemented on an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
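As an illustrative sketch only (the state and event names are invented, not taken from the MARTA project or the authors' framework), an SCXML fragment for such an event-based, context-aware in-vehicle dialogue might look like:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="Idle">
  <!-- Hypothetical in-vehicle dialogue states; event names invented -->
  <state id="Idle">
    <transition event="speech.pushToTalk" target="Listening"/>
  </state>
  <state id="Listening">
    <!-- context-awareness: a sensor-derived event reroutes the dialogue -->
    <transition event="context.highDriverLoad" target="VisualOnly"/>
    <transition event="speech.recognized" target="Responding"/>
  </state>
  <state id="Responding">
    <transition event="tts.done" target="Idle"/>
  </state>
  <state id="VisualOnly">
    <transition event="context.normalLoad" target="Idle"/>
  </state>
</scxml>
```

The point of the sketch is the design style the abstract describes: both modality events (speech, TTS) and context events (driver load) are handled uniformly as transitions of one state chart.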