67 results for Linguistic Humor

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

20.00%

Publisher:

Abstract:

To clarify the circumstances of death, the degree of inebriation is of importance in many cases, but for several reasons the determination of the ethanol concentration in post-mortem samples can be challenging, and the combined assessment of ethanol and the direct consumption markers ethyl glucuronide (EtG) and ethyl sulphate (EtS) has proved useful. The use of a rather stable matrix such as vitreous humor offers further advantages. The aim of this study was to determine the concentrations of ethanol and the biomarkers in the robust matrix of vitreous humor and to compare them with the respective levels in peripheral venous blood and urine. Samples of urine, femoral venous blood and vitreous humor were taken from 26 deceased individuals with suspected ethanol consumption prior to death and analyzed for ethanol, EtS and EtG. Creatinine was also determined in the urine samples. Personal data, the circumstances of death, the post-mortem interval and information about ethanol consumption prior to death were recorded. EtG and EtS in urine were analyzed by LC-ESI-MS/MS, creatinine was determined using the Jaffé reaction, and ethanol was detected by HS-GC-FID and by an ADH-based method. In general, the highest analyte concentrations were found in urine, and these differences were statistically significant. The mean EtG concentrations were 62.8 mg/L in urine (EtG100 206.5 mg/L), 4.3 mg/L in blood and 2.1 mg/L in vitreous humor. EtS was found at mean concentrations of 54.6 mg/L in urine (EtS100 123.1 mg/L), 1.8 mg/L in blood and 0.9 mg/L in vitreous humor. Ethanol was detected in more vitreous humor samples (mean concentration 2.0 g/kg) than in blood and urine (mean concentrations 1.6 g/kg and 2.1 g/kg, respectively). There was no correlation between ethanol and marker concentrations, and no statistically significant relationships between the markers and the different matrices could be established.
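The creatinine-normalized values quoted above (EtG100, EtS100) are not defined in the abstract; they most likely denote urinary concentrations rescaled to a reference creatinine of 100 mg/dL. A minimal Python sketch of that normalization, assuming this convention (the function name and the example creatinine value are hypothetical, not taken from the study):

```python
def normalize_to_creatinine(analyte_mg_per_l: float, creatinine_mg_per_dl: float) -> float:
    """Rescale a urinary analyte concentration to a reference creatinine of 100 mg/dL.

    Assumes the EtG100/EtS100 convention: measured concentration x (100 / creatinine).
    """
    if creatinine_mg_per_dl <= 0:
        raise ValueError("creatinine concentration must be positive")
    return analyte_mg_per_l * 100.0 / creatinine_mg_per_dl


# Illustrative values only (not an individual case from the study):
etg_urine = 62.8      # mg/L, mean urinary EtG reported in the abstract
creatinine = 30.0     # mg/dL, hypothetical urinary creatinine
print(f"EtG100 ~ {normalize_to_creatinine(etg_urine, creatinine):.1f} mg/L")
```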

Relevance:

20.00%

Publisher:

Abstract:

Speech melody, or prosody, subserves linguistic, emotional, and pragmatic functions in speech communication. Prosodic perception is based on the decoding of acoustic cues, with a predominant role of frequency-related information perceived as the speaker's pitch. Evaluation of prosodic meaning is a cognitive function implemented in cortical and subcortical networks that generate continuously updated affective or linguistic speaker impressions. Various brain-imaging methods allow delineation of the neural structures involved in prosody processing. In contrast to functional magnetic resonance imaging techniques, DC (direct current, slow) components of the EEG directly measure cortical activation without temporal delay. Activation patterns obtained with this method are highly task specific and intraindividually reproducible. The studies presented here investigated the topography of prosodic stimulus processing as a function of acoustic stimulus structure and of linguistic or affective task demands. Data obtained from DC potential measurements demonstrated that the right hemisphere has a predominant role in processing emotions from the tone of voice, irrespective of emotional valence. However, right hemisphere involvement is modulated by various speech- and language-related conditions that are associated with left hemisphere participation in prosody processing. The degree of left hemisphere involvement depends on several factors, such as (i) the articulatory demands on the perceiver of prosody (and possibly also on the producer), (ii) a relative left hemisphere specialization in processing the temporal cues that mediate prosodic meaning, and (iii) the propensity of prosody to act on the segment level in order to modulate word or sentence meaning. The specific role of top-down effects, in terms of either linguistically or affectively oriented attention, on the lateralization of stimulus processing is not clear and requires further investigation.
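Hemispheric predominance of the kind described above is commonly quantified with a laterality index computed from homologous left- and right-hemisphere activation measures. The abstract does not state which metric the DC-potential studies used, so the Python sketch below only illustrates the generic convention (the function name and example values are hypothetical):

```python
def laterality_index(left_activation: float, right_activation: float) -> float:
    """Conventional laterality index: +1 = fully left-lateralized, -1 = fully right-lateralized.

    A generic formula for illustration, not necessarily the metric used in the DC-potential studies.
    """
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("activations sum to zero; the index is undefined")
    return (left_activation - right_activation) / total


# Example: stronger right-hemisphere activation yields a negative index.
print(laterality_index(0.8, 1.4))   # ~ -0.27
```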

Relevance:

20.00%

Publisher:

Abstract:

Prosody, or speech melody, subserves linguistic (e.g., question intonation) and emotional functions in speech communication. Findings from lesion studies and imaging experiments suggest that, depending on function or acoustic stimulus structure, prosodic speech components are differentially processed in the right and left hemispheres. This direct current (DC) potential study investigated the linguistic processing of digitally manipulated pitch contours of sentences carrying an emotional or neutral intonation. Discrimination of linguistic prosody was better for neutral stimuli than for happily or fearfully spoken sentences. Brain activation was increased during the processing of happy sentences compared to neutral utterances. Neither neutral nor emotional stimuli evoked lateralized processing in the left or right hemisphere, indicating bilateral mechanisms of linguistic processing of pitch direction. Acoustic stimulus analysis suggested that prosodic components related to emotional intonation, such as pitch variability, interfered with the linguistic processing of pitch contour direction.
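Pitch variability, the acoustic property suggested above to interfere with the linguistic decoding of pitch direction, can be approximated as the spread of the fundamental frequency (F0) over the voiced portion of an utterance. The Python sketch below only illustrates that idea; the study's actual acoustic measures and the example contours are not taken from the paper:

```python
import numpy as np

def pitch_variability(f0_hz: np.ndarray) -> float:
    """Standard deviation of voiced F0 samples, a simple proxy for pitch variability."""
    voiced = f0_hz[f0_hz > 0]          # treat 0 Hz as unvoiced frames
    return float(np.std(voiced))


# Hypothetical F0 contours (Hz): an emotionally intoned vs. a neutral sentence.
happy_f0 = np.array([180, 220, 260, 0, 240, 300, 210])
neutral_f0 = np.array([190, 195, 200, 0, 198, 192, 196])
print(pitch_variability(happy_f0), pitch_variability(neutral_f0))
```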