23 results for Sentence prosody
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Speech melody or prosody subserves linguistic, emotional, and pragmatic functions in speech communication. Prosodic perception is based on the decoding of acoustic cues, with a predominant role of frequency-related information perceived as the speaker's pitch. Evaluation of prosodic meaning is a cognitive function implemented in cortical and subcortical networks that generate continuously updated affective or linguistic speaker impressions. Various brain-imaging methods allow delineation of the neural structures involved in prosody processing. In contrast to functional magnetic resonance imaging techniques, DC (direct current, slow) components of the EEG directly measure cortical activation without temporal delay. Activation patterns obtained with this method are highly task specific and intraindividually reproducible. The studies presented here investigated the topography of prosodic stimulus processing as a function of acoustic stimulus structure and of linguistic or affective task demands. Data obtained from measuring DC potentials demonstrated that the right hemisphere has a predominant role in processing emotions from the tone of voice, irrespective of emotional valence. However, right hemisphere involvement is modulated by diverse speech- and language-related conditions that are associated with left hemisphere participation in prosody processing. The degree of left hemisphere involvement depends on several factors, such as (i) articulatory demands on the perceiver of prosody (possibly also the poser), (ii) a relative left hemisphere specialization in processing the temporal cues mediating prosodic meaning, and (iii) the propensity of prosody to act on the segment level in order to modulate word or sentence meaning. The specific role of top-down effects, in terms of either linguistically or affectively oriented attention, on the lateralization of stimulus processing is not clear and requires further investigation.
Abstract:
This study aimed to develop a new linguistic based functional magnetic resonance imaging (fMRI)-sentence decision task that reliably detects hemispheric language dominance.
Abstract:
Prosody or speech melody subserves linguistic (e.g., question intonation) and emotional functions in speech communication. Findings from lesion studies and imaging experiments suggest that, depending on function or acoustic stimulus structure, prosodic speech components are differentially processed in the right and left hemispheres. This direct current (DC) potential study investigated the linguistic processing of digitally manipulated pitch contours of sentences that carried an emotional or neutral intonation. Discrimination of linguistic prosody was better for neutral stimuli as compared to happily as well as fearfully spoken sentences. Brain activation was increased during the processing of happy sentences as compared to neutral utterances. Neither neutral nor emotional stimuli evoked lateralized processing in the left or right hemisphere, indicating bilateral mechanisms of linguistic processing for pitch direction. Acoustic stimulus analysis suggested that prosodic components related to emotional intonation, such as pitch variability, interfered with linguistic processing of pitch course direction.
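One simple pitch-contour manipulation of the kind described, removing pitch variability while keeping the overall pitch level, can be sketched in Python. The frame convention (0 Hz marks unvoiced frames) and the function itself are illustrative assumptions, not the study's actual stimulus-editing procedure:

```python
import numpy as np

def flatten_contour(f0: np.ndarray) -> np.ndarray:
    """Replace the voiced part of a pitch (F0) contour by its mean,
    removing pitch variability while keeping the overall pitch level."""
    voiced = f0 > 0                  # convention: 0 Hz marks unvoiced frames
    flat = f0.copy()
    if voiced.any():
        flat[voiced] = f0[voiced].mean()
    return flat

# Example contour in Hz, one value per analysis frame
contour = np.array([0.0, 180.0, 200.0, 220.0, 0.0, 190.0])
flattened = flatten_contour(contour)   # voiced frames all become 197.5 Hz
```

A manipulation like this removes the pitch variability that, per the abstract, interfered with the linguistic processing of pitch course direction.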
Abstract:
Identification of emotional facial expression and emotional prosody (i.e. speech melody) is often impaired in schizophrenia. For facial emotion identification, a recent study suggested that the relative deficit in schizophrenia is enhanced when the presented emotion is easier to recognize. It is unclear whether this effect is specific to face processing or part of a more general emotion recognition deficit.
Abstract:
We investigated the effects of angry prosody, varying focus of attention, and laterality of presentation of angry prosody on peripheral nervous system activity. Participants paid attention to either their left or their right ear while performing a sex discrimination task on dichotically presented pseudo-words. These pseudo-words were characterized by either angry or neutral prosody and presented stereophonically (anger/neutral, neutral/anger, or neutral/neutral, for the left/right ear, respectively). Reaction times and physiological responses (heart period, skin conductance, finger and forehead temperature) in this study were differentially sensitive to the effects of anger versus neutral prosody, varying focus of attention, and laterality of presentation of angry prosody.
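The dichotic construction described above, one stimulus per ear in a stereo signal, can be sketched with NumPy. The sampling rate and the white-noise stand-ins for the recorded pseudo-words are assumptions for illustration:

```python
import numpy as np

SR = 44100  # assumed sampling rate in Hz

def dichotic_pair(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two mono signals into one stereo buffer so that each ear
    receives a different stimulus (dichotic presentation)."""
    n = max(len(left), len(right))
    stereo = np.zeros((n, 2))
    stereo[:len(left), 0] = left     # left-ear channel
    stereo[:len(right), 1] = right   # right-ear channel
    return stereo

# White-noise stand-ins for the recorded pseudo-words (1 s each)
rng = np.random.default_rng(0)
angry = rng.normal(size=SR)
neutral = rng.normal(size=SR)

# The three conditions from the abstract (left ear / right ear)
conditions = {
    "anger/neutral": dichotic_pair(angry, neutral),
    "neutral/anger": dichotic_pair(neutral, angry),
    "neutral/neutral": dichotic_pair(neutral, neutral),
}
```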
Abstract:
PURPOSE: We aimed at further elucidating whether aphasic patients' difficulties in understanding non-canonical sentence structures, such as Passive or Object-Verb-Subject sentences, can be attributed to impaired morphosyntactic cue recognition, and to problems in integrating competing interpretations. METHODS: A sentence-picture matching task with canonical and non-canonical spoken sentences was performed using concurrent eye tracking. Accuracy, reaction time, and eye tracking data (fixations) of 50 healthy subjects and 12 aphasic patients were analysed. RESULTS: Patients showed increased error rates and reaction times, as well as delayed fixation preferences for target pictures in non-canonical sentences. Patients' fixation patterns differed from healthy controls and revealed deficits in recognizing and immediately integrating morphosyntactic cues. CONCLUSION: Our study corroborates the notion that difficulties in understanding syntactically complex sentences are attributable to a processing deficit encompassing delayed and therefore impaired recognition and integration of cues, as well as increased competition between interpretations.
Abstract:
OBJECTIVE: We sought to investigate the activity of bilateral parietal and premotor areas during a Go/No Go paradigm involving praxis movements of the dominant hand. METHODS: A sentence was presented which instructed subjects on what movement to make (S1; for example, "Show me how to use a hammer."). After an 8-s delay, "Go" or "No Go" (S2) was presented. If Go, they were instructed to make the movement described in the S1 instruction sentence as quickly as possible, and continuously until the "Rest" cue was presented 3 s later. If No Go, subjects were to simply relax until the next instruction sentence. Event-related potentials (ERP) and event-related desynchronization (ERD) in the beta band (18-22 Hz) were evaluated for three time bins: after S1, after S2, and from -2.5 to -1.5 s before the S2 period. RESULTS: Bilateral premotor ERP was greater than bilateral parietal ERP after the S2 Go compared with the No Go. Additionally, left premotor ERP was greater than that from the right premotor area. There was predominant left parietal ERD immediately after S1 for both Go and No Go, which was sustained for the duration of the interval between S1 and S2. For both S2 stimuli, predominant left parietal ERD was again seen when compared to that from the left premotor or right parietal area. However, the left parietal ERD was greater for Go than No Go. CONCLUSION: The results suggest a dominant role in the left parietal cortex for planning, executing, and suppressing praxis movements. The ERP and ERD show different patterns of activation and may reflect distinct neural movement-related activities. SIGNIFICANCE: The data can guide further studies to determine the neurophysiological changes occurring in apraxia patients and help explain the unique error profiles seen in patients with left parietal damage.
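The ERD measure used above, the percent change of beta-band (18-22 Hz) power in an activity window relative to a pre-stimulus reference window, can be sketched as follows. The sampling rate and the FFT-based power estimate are illustrative assumptions; the authors' actual pipeline is not described in the abstract:

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def band_power(x: np.ndarray, lo: float = 18.0, hi: float = 22.0,
               fs: int = FS) -> float:
    """Total spectral power of x in the beta band (18-22 Hz)."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def erd_percent(activity: np.ndarray, reference: np.ndarray) -> float:
    """Event-related (de)synchronization: percent change of beta power in
    the activity window relative to the reference window. Negative values
    indicate desynchronization (a power decrease)."""
    r = band_power(reference)
    return 100.0 * (band_power(activity) - r) / r

# A 20 Hz oscillation that halves in amplitude after the stimulus
t = np.arange(FS) / FS
reference = np.sin(2 * np.pi * 20 * t)
activity = 0.5 * reference
# erd_percent(activity, reference) -> -75.0 (power scales with amplitude squared)
```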
Abstract:
OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech rates (three different speakers), webcams (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS A higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and a shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full screen mode. There was a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users when visual cues were additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication for hearing-impaired individuals.
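The thresholds reported in the results (frame rate above 7 fps, resolution above 640 × 480 px, picture/sound delay under 100 ms) can be collected into a small predicate. This is a sketch summarizing the reported findings, not part of the study's software:

```python
def supports_speech_reading(width: int, height: int, fps: float,
                            delay_ms: float) -> bool:
    """True if a video-call configuration meets the thresholds the study
    associated with higher speech-perception scores: >7 fps frame rate,
    resolution above 640x480 px, and picture/sound delay under 100 ms."""
    return fps > 7 and width * height > 640 * 480 and delay_ms < 100

supports_speech_reading(1280, 720, 30, 50)   # meets all three thresholds
supports_speech_reading(320, 240, 5, 200)    # fails all three thresholds
```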
Abstract:
Bone Anchored Hearing Implants (BAHI) are routinely used in patients with conductive or mixed hearing loss, e.g. if conventional air conduction hearing aids cannot be used. New sound processors and new fitting software now allow parameters such as loudness compression ratios or maximum power output to be adjusted separately. It is currently unclear how the choice of these parameters influences aided speech understanding in BAHI users. In this prospective experimental study, the effects of varying the compression ratio and of lowering the maximum power output in a BAHI were investigated. Twelve experienced adult subjects with a mixed hearing loss participated in this study. Four different compression ratios (1.0; 1.3; 1.6; 2.0) were tested along with two different maximum power output settings, resulting in a total of eight different programs. Each participant tested each program for two weeks. A blinded Latin square design was used to minimize bias. For each of the eight programs, speech understanding in quiet and in noise was assessed. For speech in quiet, the Freiburg number test and the Freiburg monosyllabic word test at 50, 65, and 80 dB SPL were used. For speech in noise, the Oldenburg sentence test was administered. Speech understanding in quiet and in noise was improved significantly in the aided condition with every program, when compared to the unaided condition. However, no significant differences were found between any of the eight programs. In contrast, on a subjective level there was a significant preference for medium compression ratios of 1.3 to 1.6 and for the higher maximum power output.
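The Latin square design mentioned above, in which each participant tests the eight programs in a different order and every program appears equally often in every test position, can be illustrated with a simple cyclic construction. This is a generic sketch; the study's actual square is not given in the abstract:

```python
def latin_square(n: int) -> list[list[int]]:
    """Cyclic n x n Latin square: row i is the program order for
    participant i; each program appears exactly once per row (participant)
    and exactly once per column (test position)."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

# Order of the eight BAHI programs for the first eight participants
square = latin_square(8)
```

Counterbalancing the order this way prevents learning or fatigue effects from systematically favoring any one program.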
Abstract:
Medical emergencies on international flights are not uncommon. In these situations the question often arises whether physicians are obliged to render first aid and whether omission leads to legal consequences. The general obligation to aid those in need applies to everyone, not only to physicians. Evading this duty makes one liable to prosecution for failure to render aid to a person in need under Art. 128 of the Swiss Penal Code, punishable by a custodial sentence of up to three years or an equivalent punitive fine. Vocational and professional law extend physicians' duty to aid to urgent cases. Although it results from the performance of a legal obligation, malpractice occurring in the course of first aid can lead to claims for compensation, even from foreign patients, and under their own domestic law.
Abstract:
Background: Emotion research in neuroscience targets brain structures and processes involved in discrete emotion categories (e.g. anger, fear, sadness) or dimensions (e.g. valence, arousal, approach-avoidance), and usually relies on carefully controlled experimental paradigms with standardized and often simple emotion-eliciting stimuli such as unpleasant pictures. Emotion research in clinical psychology and psychotherapy is often interested in very subtle differences between emotional states, e.g. differences within emotion categories (e.g. assertive, self-protecting vs. rejecting, protesting anger, or specific grief vs. global sadness), and/or in the biographical, social, situational, or motivational contexts of the emotional experience, factors that experimental neuroscientific research seeks to minimize. Objective: In order to facilitate the experimental and neurophysiological investigation of psychotherapeutically relevant emotional experiences, the present study aims to develop a priming procedure that induces specific, therapeutically and biographically relevant emotional states under controlled experimental conditions. Methodology: N = 50 participants who reported negative feelings towards another close person were randomly assigned to 2 different conditions. They completed one of 2 different sentence completion tasks designed to prime either ‘therapeutically productive’ or ‘therapeutically unproductive’ emotional states, followed by an expressive writing task and several self-report measures of specific emotion-related constructs. The sentence completion task consisted of up to 22 sentence stems drawn from psychotherapy patients’ statements that have been shown to be typical of productive or unproductive therapy sessions. The subjects of the present study completed these sentence stems with regard to their own negative feelings towards the close person.
Results: There was substantial inter-individual variability in the number of completed sentences, and there were significant correlations between the number of completed sentences and problem activation in both conditions. No differences in general mood or problem activation were observed between the groups after priming. Descriptively, there were differences between the groups concerning emotion regulation aspects. Significant differences between the groups were found in the resolution of negative feelings towards the other person. Discussion: The results point in the expected direction; however, the small sample sizes (after the exclusion of several subjects) and low power hinder the detection of convincing significant effects. More data are needed in order to evaluate the efficacy of this emotional priming procedure.