108 results for OCL (Object Constraint Language)
Abstract:
AIMS: To investigate empirically the hypothesized relationship between counsellor motivational interviewing (MI) skills and patient change talk (CT) by analysing the articulation between counsellor behaviours and patient language during brief motivational interventions (BMI) addressing at-risk alcohol consumption. DESIGN: Sequential analysis of psycholinguistic codes obtained by two independent raters using the Motivational Interviewing Skill Code (MISC), version 2.0. SETTING: Secondary analysis of data from a randomized controlled trial evaluating the effectiveness of BMI in an emergency department. PARTICIPANTS: A total of 97 patients tape-recorded when receiving BMI. MEASUREMENTS: MISC variables were categorized into three counsellor behaviours (MI-consistent, MI-inconsistent and 'other') and three kinds of patient language (CT, counter-CT (CCT) and utterances not linked with the alcohol topic). Observed transition frequencies, conditional probabilities and significance levels based on odds ratios were computed using sequential analysis software. FINDINGS: MI-consistent behaviours were the only counsellor behaviours that were significantly more likely to be followed by patient CT. Those behaviours were significantly more likely to be followed by patient change exploration (CT and CCT) while MI-inconsistent behaviours and 'other' counsellor behaviours were significantly more likely to be followed by utterances not linked with the alcohol topic and significantly less likely to be followed by CT. MI-consistent behaviours were more likely after change exploration, whereas 'other' counsellor behaviours were more likely only after utterances not linked with the alcohol topic. CONCLUSIONS: Findings lend support to the hypothesized relationship between MI-consistent behaviours and CT, highlight the importance of patient influence on counsellor behaviour and emphasize the usefulness of MI techniques and spirit during brief interventions targeting change enhancement.
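The sequential analysis described above can be illustrated with a minimal sketch: assuming a hypothetical lag-1 stream of coded utterances (the codes and counts below are invented for illustration, not MISC 2.0 output or study data), it tallies counsellor-to-patient transition frequencies, conditional probabilities, and odds ratios.

```python
from collections import Counter

# Hypothetical lag-1 sequence of coded utterances (not study data):
# counsellor codes: MICO (MI-consistent), MIIN (MI-inconsistent), OTHER
# patient codes:    CT (change talk), CCT (counter-change talk), NL (not alcohol-related)
sequence = ["MICO", "CT", "MICO", "CCT", "OTHER", "NL", "MIIN", "NL",
            "MICO", "CT", "OTHER", "NL", "MIIN", "CCT", "MICO", "CT"]

counsellor_codes = {"MICO", "MIIN", "OTHER"}
patient_codes = {"CT", "CCT", "NL"}

# Count counsellor -> patient transitions (lag 1).
transitions = Counter(
    (a, b) for a, b in zip(sequence, sequence[1:])
    if a in counsellor_codes and b in patient_codes
)

def odds_ratio(given, target):
    """Odds of `target` following `given` versus following any other
    counsellor code (0.5 added to each cell to avoid division by zero)."""
    a = transitions[(given, target)]                                # given -> target
    b = sum(transitions[(given, p)] for p in patient_codes) - a     # given -> other patient code
    c = sum(transitions[(g, target)] for g in counsellor_codes if g != given)
    d = sum(v for (g, p), v in transitions.items() if g != given) - c
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return (a * d) / (b * c)

for g in sorted(counsellor_codes):
    row_total = sum(transitions[(g, p)] for p in patient_codes)
    for p in sorted(patient_codes):
        cond_prob = transitions[(g, p)] / row_total if row_total else 0.0
        print(f"{g} -> {p}: n={transitions[(g, p)]}, "
              f"P={cond_prob:.2f}, OR={odds_ratio(g, p):.2f}")
```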
Abstract:
The production of object and action words can be dissociated in aphasics, yet their anatomical correlates have been difficult to distinguish in functional imaging studies. To investigate the extent to which the cortical neural networks underlying object- and action-naming processing overlap, we performed electrostimulation mapping (ESM), which is a neurosurgical mapping technique routinely used to examine language function during brain-tumor resections. Forty-one right-handed patients who had surgery for a brain tumor were asked to perform overt naming of object and action pictures under stimulation. Overall, 73 out of the 633 stimulated cortical sites (11.5%) were associated with stimulation-induced language interferences. These interference sites were highly localized (<1 cm²) and showed substantial variability across individuals in their exact localization. Stimulation interfered with both object and action naming over 44 sites, whereas it specifically interfered with object naming over 19 sites and with action naming over 10 sites. Specific object-naming sites were mainly identified in Broca's area (Brodmann area 44/45) and the temporal cortex, whereas action-naming-specific sites were mainly identified in the posterior midfrontal gyrus (Brodmann area 6/9) and Broca's area (P = 0.003 by Fisher's exact test). The anatomical loci we emphasized are in line with a cortical distinction between objects and actions based on conceptual/semantic features, whereby the prefrontal/premotor cortex would preferentially support sensorimotor contingencies associated with actions, whereas the temporal cortex would preferentially underpin (functional) properties of objects. Hum Brain Mapp 35:429-443, 2014. © 2012 Wiley Periodicals, Inc.
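As a purely illustrative aside, the region-by-naming-type comparison reported above can be expressed as Fisher's exact test on a 2x2 contingency table; the counts below are invented and are not the study's site counts.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (not the study's counts):
# rows    = sites with naming-specific interference (object, action)
# columns = cortical location of the site (Broca/temporal, premotor/midfrontal)
table = [[15, 4],   # object-naming-specific sites
         [3, 7]]    # action-naming-specific sites

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")
```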
Abstract:
Language is typically a function of the left hemisphere, but the right hemisphere is also essential in some healthy individuals and patients. This inter-subject variability necessitates the localization of language function, at the individual level, prior to neurosurgical intervention. Such assessments are typically made by comparing left and right hemisphere language function to determine "language lateralization" using clinical tests or fMRI. Here, we show that language function needs to be assessed at the region- and hemisphere-specific level, because laterality measures can be misleading. Using fMRI data from 82 healthy participants, we investigated the degree to which activation for a semantic word matching task was lateralized in 50 different brain regions and across the entire cortex. This revealed two novel findings. First, the degree to which language is lateralized across brain regions and between subjects was primarily driven by differences in right hemisphere activation rather than differences in left hemisphere activation. Second, we found that healthy subjects who have relatively high left lateralization in the angular gyrus also have relatively low left lateralization in the ventral precentral gyrus. These findings illustrate spatial heterogeneity in language lateralization that is lost when global laterality measures are considered. It is likely that the complex spatial variability we observed in healthy controls is more exaggerated in patients with brain damage. We therefore highlight the importance of investigating within-hemisphere regional variations in fMRI activation, prior to neurosurgical intervention, to determine how each hemisphere and each region contributes to language processing. Hum Brain Mapp, 2010. © 2010 Wiley-Liss, Inc.
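The abstract does not state the formula used to quantify lateralization, but a conventional regional laterality index is LI = (L - R) / (L + R), where L and R are activation measures in homologous left and right regions. The sketch below illustrates this with invented values; the region names and numbers are assumptions for illustration only.

```python
# Minimal sketch of a regional laterality index, LI = (L - R) / (L + R),
# where L and R are activation measures (e.g., summed suprathreshold
# contrast values) in homologous left/right regions. Values are invented.
regions = {
    # region name: (left activation, right activation)
    "angular_gyrus":            (12.4, 3.1),
    "ventral_precentral_gyrus": (8.2, 6.9),
    "pars_opercularis":         (10.5, 2.2),
}

def laterality_index(left, right):
    """Positive = left-lateralized, negative = right-lateralized."""
    return (left - right) / (left + right) if (left + right) else 0.0

for name, (left, right) in regions.items():
    print(f"{name}: LI = {laterality_index(left, right):+.2f}")
```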
Abstract:
The genetic characterization of unbalanced mixed stains remains an important area where improvement is imperative. In fact, with current methods for DNA analysis (Polymerase Chain Reaction with the SGM Plus™ multiplex kit), it is generally not possible to obtain a conventional autosomal DNA profile of the minor contributor if the ratio between the two contributors in a mixture is smaller than 1:10. This is a consequence of the fact that the major contributor's profile 'masks' that of the minor contributor. Besides known remedies to this problem, such as Y-STR analysis, a new compound genetic marker that consists of a Deletion/Insertion Polymorphism (DIP), linked to a Short Tandem Repeat (STR) polymorphism, has recently been developed and proposed elsewhere in the literature [1]. The present paper reports on the derivation of an approach for the probabilistic evaluation of DIP-STR profiling results obtained from unbalanced DNA mixtures. The procedure is based on object-oriented Bayesian networks (OOBNs) and uses the likelihood ratio as an expression of probative value. OOBNs are adopted in this paper because they allow one to provide a clear description of the genotypic configuration observed for the mixed stain as well as for the various potential contributors (e.g., victim and suspect). These models also allow one to depict the assumed relevance relationships and perform the necessary probabilistic computations.
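The paper itself performs these computations with object-oriented Bayesian networks; as a rough, purely illustrative stand-in, the sketch below computes a likelihood ratio LR = Pr(E | Hp) / Pr(E | Hd) for a toy, fully resolved two-person mixture in which Hp states that the contributors are the victim and the suspect, and Hd that they are the victim and an unknown person. The locus names, alleles, and frequencies are invented, and Hardy-Weinberg proportions are assumed.

```python
# Toy likelihood ratio for a two-person mixture at independent DIP-STR loci,
# assuming the minor-contributor profile is fully resolved, the suspect's
# genotype matches it, and Hardy-Weinberg equilibrium holds.
# Hp: contributors are victim + suspect.
# Hd: contributors are victim + unknown person.
# Under these assumptions Pr(E|Hp) = 1 per locus and Pr(E|Hd) equals the
# random-match probability of the minor profile, so LR = 1 / Pr(genotype).
# Allele frequencies below are invented for illustration.

allele_freqs = {
    "DIP-STR_1": {"L-12": 0.08, "S-10": 0.21},   # hypothetical alleles
    "DIP-STR_2": {"L-9": 0.05},
}
minor_profile = {
    "DIP-STR_1": ("L-12", "S-10"),   # heterozygous
    "DIP-STR_2": ("L-9", "L-9"),     # homozygous
}

def genotype_frequency(freqs, genotype):
    a, b = genotype
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

lr = 1.0
for locus, genotype in minor_profile.items():
    lr *= 1.0 / genotype_frequency(allele_freqs[locus], genotype)

print(f"combined LR = {lr:,.0f}")
```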
Abstract:
The considerable body of work published in the field of functional neuroimaging on language functions or modalities (speech comprehension and production, reading) and on the various linguistic processes underlying them (semantics, phonology, syntax) makes it possible to identify broad trends in terms of anatomical substrates. While the "fundamentals" inherited from the field's aphasiological origins have not been overturned, certain specific features not explored by the lesion approach can now be identified. Meta-analyses, by pooling results from the literature, today provide a global view of the cerebral substrates of language. However, inter-individual variability remains substantial owing to multiple factors, some of which are poorly identified; exhaustively mapping language functions at the individual level remains a challenge. The quest for images of language is probably as unending as the study of language itself.
Abstract:
Introduction: Neuroimaging of the self has focused on high-level mechanisms such as language, memory or imagery of the self. Recent evidence suggests that low-level mechanisms of multisensory and sensorimotor integration may play a fundamental role in encoding self-location and the first-person perspective (Blanke and Metzinger, 2009). Neurological patients with out-of-body experiences (OBE) suffer from abnormal self-location and first-person perspective due to damage in the temporo-parietal junction (Blanke et al., 2004). Although self-location and the first-person perspective can be studied experimentally (Lenggenhager et al., 2009), the neural underpinnings of self-location have yet to be investigated. To investigate the brain network involved in self-location and first-person perspective we used visuo-tactile multisensory conflict, magnetic resonance (MR)-compatible robotics, and fMRI in study 1, and lesion analysis in a sample of 9 patients with OBE due to focal brain damage in study 2. Methods: Twenty-two participants saw a video showing either a person's back or an empty room being stroked (visual stimuli) while the MR-compatible robotic device stroked their back (tactile stimulation). Direction and speed of the seen stroking could either correspond (synchronous) or not (asynchronous) to those of the felt stroking. Each run comprised the four conditions according to a 2x2 factorial design with Object (Body, No-Body) and Synchrony (Synchronous, Asynchronous) as main factors. Self-location was estimated using the mental ball dropping task (MBD; Lenggenhager et al., 2009). After the fMRI session participants completed a 6-item questionnaire adapted from the original questionnaire created by Botvinick and Cohen (1998) and based on questions and data obtained by Lenggenhager et al. (2007, 2009). They were also asked to complete a questionnaire to disclose the perspective they adopted during the illusion. Response times (RTs) for the MBD and fMRI data were analyzed with a 3-way mixed-model ANOVA with the between-subjects factor Perspective (up, down) and the two within-subjects factors Object (body, no-body) and Stroking (synchronous, asynchronous). Quantitative lesion analysis was performed using MRIcron (Rorden et al., 2007). We compared the distributions of brain lesions confirmed by multimodality imaging (Knowlton, 2004) in patients with OBE with those of patients showing complex visual hallucinations involving people or faces, but without any disturbance of self-location and first-person perspective. Nine patients with OBE were investigated. The control group comprised 8 patients. Structural imaging data were available for normalization and co-registration in all the patients. Normalization of each patient's lesion into the common MNI (Montreal Neurological Institute) reference space permitted simple, voxel-wise, algebraic comparisons to be made. Results: Even though all participants were lying on their back and facing upwards in the scanner, analysis of perspective showed that half of the participants had the impression of looking down at the virtual human body below them, regardless of any cues about their body position (Down-group). The other participants had the impression of looking up at the virtual body above them (Up-group). Analysis of Q3 ("How strong was the feeling that the body you saw was you?") indicated stronger self-identification with the virtual body during synchronous stroking. RTs in the MBD task confirmed these subjective data (significant 3-way interaction between Perspective, Object and Stroking).
fMRI results showed eight cortical regions where the BOLD signal was significantly different during at least one of the conditions resulting from the combination of Object and Stroking, relative to baseline: right and left temporo-parietal junction, right EBA, left middle occipito-temporal gyrus, left postcentral gyrus, right medial parietal lobe, and bilateral medial occipital lobe (Fig 1). The activation patterns in the right and left temporo-parietal junction and right EBA reflected changes in self-location and perspective, as revealed by statistical analysis performed on the percentage of BOLD change with respect to baseline. Statistical lesion overlap comparison (using nonparametric voxel-based lesion-symptom mapping) with respect to the control group revealed the right temporo-parietal junction, centered at the angular gyrus (Talairach coordinates x = 54, y = -52, z = 26; p < 0.05, FDR corrected). Conclusions: The present questionnaire and behavioural results show that, despite the noisy and constraining MR environment, our participants had predictable changes in self-location, self-identification, and first-person perspective when robotic tactile stroking was applied synchronously with the visual stroking seen in the video. fMRI data in healthy participants and lesion data in patients with abnormal self-location and first-person perspective jointly revealed that the temporo-parietal cortex, especially in the right hemisphere, encodes these conscious experiences. We argue that temporo-parietal activity reflects the experience of the conscious "I" as embodied and localized within bodily space.
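As a hedged sketch of the kind of condition-wise summary underlying the analyses above (the study itself used a 3-way mixed-model ANOVA and dedicated neuroimaging tools), the snippet below computes percent BOLD signal change relative to baseline and the cell means of the 2 (Object) x 2 (Stroking) design within each Perspective group; subject identifiers and signal values are invented.

```python
import pandas as pd

# Invented per-subject mean signal values (arbitrary units) for one region of interest.
rows = [
    # subject, perspective, object, stroking, condition signal, baseline signal
    ("s01", "down", "body",    "sync",  103.2, 100.0),
    ("s01", "down", "body",    "async", 101.5, 100.0),
    ("s01", "down", "no-body", "sync",  100.9, 100.0),
    ("s01", "down", "no-body", "async", 100.4, 100.0),
    ("s02", "up",   "body",    "sync",  102.1, 100.0),
    ("s02", "up",   "body",    "async", 101.8, 100.0),
    ("s02", "up",   "no-body", "sync",  100.7, 100.0),
    ("s02", "up",   "no-body", "async", 100.6, 100.0),
]
df = pd.DataFrame(rows, columns=["subject", "perspective", "object",
                                 "stroking", "signal", "baseline"])

# Percent BOLD change relative to baseline, then cell means of the
# 2 (Object) x 2 (Stroking) design within each Perspective group.
df["pct_change"] = 100 * (df["signal"] - df["baseline"]) / df["baseline"]
cell_means = (df.groupby(["perspective", "object", "stroking"])["pct_change"]
                .mean()
                .unstack("stroking"))
print(cell_means)
```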
From technicians to classics: on the rationalization of the Russian language in the USSR (1917-1953)
Abstract:
The vision-for-action literature favours the idea that the motor output of an action, whether manual or oculomotor, leads to similar results regarding object handling. Findings on line bisection performance challenge this idea: healthy individuals bisect lines manually to the left of centre, and to the right of centre when using eye fixation. If these opposite biases for manual and oculomotor action reflect more general compensatory mechanisms that cancel each other out to enhance overall accuracy, one would expect to observe comparable opposite biases for other material. In the present study, we report on three independent experiments in which we tested line bisection (by hand, by eye fixation) not only for solid lines, but also for letter lines; the latter are known to result in a rightward bias when bisected manually. Accordingly, we expected a leftward bias for letter lines when bisected via eye fixation. Analysis of bisection biases provided evidence for this idea: manual bisection was more rightward for letter than for solid lines, while bisection by eye fixation was more leftward for letter than for solid lines. Support for the eye-fixation observation was particularly clear in two of the three experiments, in which comparability between eye and hand action was progressively improved (paper-and-pencil versus touch screen for manual action). These findings question the assumption that oculomotor and manual output are always interchangeable, and rather suggest that, at least in some situations, oculomotor and manual output biases are opposite to each other, possibly balancing each other out.
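A minimal sketch of how a directional bisection bias could be scored (negative values indicating a leftward deviation from true centre, positive values a rightward deviation); the line length, modalities and mark positions below are invented and do not reproduce the experiments' stimuli or data.

```python
# Directional bisection bias: signed deviation of the marked midpoint from
# the true centre, expressed as a percentage of line length
# (negative = leftward bias, positive = rightward bias). Data are invented.
line_length_mm = 200.0
true_centre_mm = line_length_mm / 2

trials = [
    # (response modality, line type, marked position in mm from the left end)
    ("manual", "solid",  98.4),
    ("manual", "letter", 101.9),
    ("eye",    "solid",  101.2),
    ("eye",    "letter",  97.6),
]

def bias_percent(marked_mm):
    return 100 * (marked_mm - true_centre_mm) / line_length_mm

for modality, line_type, marked in trials:
    print(f"{modality:6s} {line_type:6s} bias = {bias_percent(marked):+.2f}% of line length")
```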