994 results for discrimination training
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations in univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
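The two resampling estimates compared in the abstract can be illustrated with a small sketch. This is a hedged toy example, not the MAQC-II pipeline: it uses a simple nearest-centroid classifier on synthetic two-class data, and all function names are hypothetical.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # toy classifier: one centroid per class
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    classes, centroids = model
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]

def cv_accuracy(X, y, k=5, seed=0):
    # k-fold cross-validation estimate of classification accuracy
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        model = nearest_centroid_fit(X[train], y[train])
        accs.append((nearest_centroid_predict(model, X[fold]) == y[fold]).mean())
    return float(np.mean(accs))

def bootstrap_oob_accuracy(X, y, n_boot=50, seed=0):
    # out-of-bag bootstrap estimate: train on a resample drawn with
    # replacement, evaluate on the samples left out of that resample
    rng = np.random.default_rng(seed)
    n = len(y)
    accs = []
    for _ in range(n_boot):
        boot = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), boot)
        model = nearest_centroid_fit(X[boot], y[boot])
        accs.append((nearest_centroid_predict(model, X[oob]) == y[oob]).mean())
    return float(np.mean(accs))
```

In the study, such internal estimates were compared against accuracy measured on a held-out validation set; the two functions above merely show how a cross-validation and an out-of-bag bootstrap estimate are each formed from the same training data.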
Abstract:
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent common mechanisms subserve such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated along basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported, wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ∼300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Abstract:
Detection and discrimination of visuospatial input involve at least two operations: extracting, selecting, and encoding relevant information, and decision-making processes that allow a response to be selected. These two operations are altered, respectively, by attentional mechanisms that change discrimination capacities and by beliefs concerning the likelihood of uncertain events. Information processing is tuned by the attentional level, which acts like a filter on perception, while decision-making processes are weighted by the subjective probability of risk. In addition, it has been shown that anxiety can affect the detection of unexpected events through modification of the level of arousal. The purpose of this study was therefore to determine whether and how decision-making and brain dynamics are affected by anxiety. To investigate these questions, the performance of women with either high (n = 12) or low (n = 12) STAI-T scores (State-Trait Anxiety Inventory; Spielberger, 1983) was examined in a visuospatial decision-making task in which subjects had to distinguish a target visual pattern from non-target patterns. The target pattern was a schematic image of furniture arranged so as to give the impression of a living room. Non-target patterns were created by either compressing or dilating the distances between objects. Target and non-target patterns were always presented in the same configuration. Preliminary behavioral results showed no group difference in reaction time. In addition, visuospatial abilities were analyzed through signal detection theory, which quantifies perceptual decisions in the presence of uncertainty (Green and Swets, 1966) by treating detection of a stimulus as a decision-making process determined by the nature of the stimulus and by cognitive factors. Surprisingly, no group difference was observed in either the d' index (the distance between the means of the signal and noise distributions) or the c index (the response criterion, related to the likelihood ratio).
Comparison of event-related potentials (ERPs) reveals that brain dynamics differ according to anxiety. Component latencies differed between groups, with a delay in anxious subjects over posterior electrode sites. These differences were compensated during later components by shorter latencies in anxious subjects compared with non-anxious ones. These inverted effects seem to indicate that the absence of a difference in reaction time relies on a compensatory attentional mechanism that tunes cortical activation in anxious subjects, who must nevertheless work harder to maintain performance.
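The signal-detection indices d' and c discussed above can be computed directly from hit and false-alarm counts. A minimal sketch using only Python's standard library; the log-linear correction for extreme rates is a common convention assumed here, not taken from the study:

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    # hit and false-alarm rates, with a log-linear correction so that
    # rates of exactly 0 or 1 do not produce infinite z-scores
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(fa)          # sensitivity: separation of distributions
    criterion = -(z(h) + z(fa)) / 2  # c: response bias
    return d_prime, criterion
```

When the hit rate equals the false-alarm rate, d' is 0 (no sensitivity); a symmetric split of hits and false alarms yields c near 0 (no response bias).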
Abstract:
Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while they consciously perceived, and often explicitly recognized, the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH) during the first 2 days of coma, while recording 19-channel electroencephalography (EEG). At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations, as well as to sounds from living vs. man-made sources. Discrimination between vocalization types was accurate in 11 patients, and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories and the contribution of awareness to auditory category discrimination.
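The per-patient decoding step can be sketched schematically. The leave-one-trial-out nearest-mean classifier below is a hypothetical stand-in for the decoding algorithm the study actually used; the epoch shapes and the channel count (19, matching the EEG montage) are illustrative only.

```python
import numpy as np

def decode_categories(epochs_a, epochs_b):
    """Leave-one-trial-out decoding of two stimulus categories.

    epochs_a, epochs_b: arrays of shape (n_trials, n_channels, n_times).
    Returns the fraction of held-out trials assigned to the correct
    category by a nearest-mean (prototype) classifier.
    """
    n_a, n_b = len(epochs_a), len(epochs_b)
    X = np.concatenate([epochs_a, epochs_b]).reshape(n_a + n_b, -1)
    y = np.array([0] * n_a + [1] * n_b)
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i          # hold out trial i
        m0 = X[keep & (y == 0)].mean(axis=0)   # class-0 prototype
        m1 = X[keep & (y == 1)].mean(axis=0)   # class-1 prototype
        pred = 0 if np.linalg.norm(X[i] - m0) <= np.linalg.norm(X[i] - m1) else 1
        correct += int(pred == y[i])
    return correct / len(y)
```

For two balanced categories, chance accuracy is about 0.5; per-patient significance would be established against a permutation distribution, a step omitted in this sketch.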
Abstract:
Past multisensory experiences can influence current unisensory processing and memory performance. Repeated images are better discriminated if initially presented as auditory-visual pairs, rather than only visually. An experience's context thus plays a role in how well repetitions of certain aspects are later recognized. Here, we investigated factors during the initial multisensory experience that are essential for generating improved memory performance. Subjects discriminated repeated versus initial image presentations intermixed within a continuous recognition task. Half of initial presentations were multisensory, and all repetitions were only visual. Experiment 1 examined whether purely episodic multisensory information suffices for enhancing later discrimination performance by pairing visual objects with either tones or vibrations. We could therefore also assess whether effects can be elicited with different sensory pairings. Experiment 2 examined semantic context by manipulating the congruence between auditory and visual object stimuli within blocks of trials. Relative to images only encountered visually, accuracy in discriminating image repetitions was significantly impaired by auditory-visual, yet unaffected by somatosensory-visual multisensory memory traces. By contrast, this accuracy was selectively enhanced for visual stimuli with semantically congruent multisensory pasts and unchanged for those with semantically incongruent multisensory pasts. The collective results reveal opposing effects of purely episodic versus semantic information from auditory-visual multisensory events. Nonetheless, both types of multisensory memory traces are accessible for processing incoming stimuli and indeed result in distinct visual object processing, leading to either impaired or enhanced performance relative to unisensory memory traces. We discuss these results as supporting a model of object-based multisensory interactions.
Abstract:
A review article in The New England Journal of Medicine recounts that almost a century ago, Abraham Flexner, a research scholar at the Carnegie Foundation for the Advancement of Teaching, undertook an assessment of medical education in the 155 medical schools then in operation in the United States and Canada. Flexner's report emphasized the nonscientific approach of American medical schools to preparation for the profession, which contrasted with the university-based system of medical education in Germany. At the core of Flexner's view was the notion that formal analytic reasoning, the kind of thinking integral to the natural sciences, should hold pride of place in the intellectual training of physicians. This idea was pioneered at Harvard University, the University of Michigan, and the University of Pennsylvania in the 1880s, but was most fully expressed in the educational program at Johns Hopkins University, which Flexner regarded as the ideal for medical education. (...)
Abstract:
Intermittent hypoxic exposure with exercise training is based on the assumption that brief exposure to hypoxia is sufficient to induce beneficial muscular adaptations mediated via hypoxia-inducible transcription factors (HIF). We previously demonstrated (Mounier et al. Med Sci Sports Exerc 38:1410-1417, 2006) that leukocytes respond to hypoxia with a marked inter-individual variability in HIF-1alpha mRNA. This study compared the effects of 3 weeks of intermittent hypoxic training on hif gene expression in both skeletal muscle and leukocytes. Male endurance athletes (n = 19) were divided into an Intermittent Hypoxic Exposure group (IHE) and a Normoxic Training group (NT), with each group following a similar 3-week exercise training program. After training, the amount of HIF-1alpha mRNA in muscle decreased only in the IHE group (-24.7%, P < 0.05), whereas it remained unchanged in leukocytes in both groups. The levels of VEGF(121) and VEGF(165) mRNA in skeletal muscle increased significantly after training only in the NT group (+82.5%, P < 0.05 for VEGF(121); +41.2%, P < 0.05 for VEGF(165)). In leukocytes, only the IHE group showed a significant change in VEGF(165) (-28.2%, P < 0.05). The significant decrease in HIF-1alpha mRNA in skeletal muscle after hypoxic training suggests that transcriptional and post-transcriptional regulation of the hif-1alpha gene differs between muscle and leukocytes.
Abstract:
This study examined the effects of intermittent hypoxic training (IHT) on skeletal muscle monocarboxylate lactate transporter (MCT) expression and anaerobic performance in trained athletes. Cyclists were assigned to one of two interventions, normoxic (N; n = 8; 150 mmHg PIO2) or hypoxic (H; n = 10; ∼3000 m, 100 mmHg PIO2), over a 3-week training period (5 sessions of 1 h to 1 h 30 min per week). Prior to and after training, an incremental exercise test to exhaustion (EXT) was performed in normoxia, together with a 2-min time trial (TT). Biopsy samples from the vastus lateralis were analyzed for MCT1 and MCT4 using immunoblotting techniques. Peak power output (PPO) increased (p<0.05) after training (7.2% and 6.6% for N and H, respectively), but VO2max showed no significant change. The average power output in the TT improved significantly (7.3% and 6.4% for N and H, respectively). No differences were found in MCT1 or MCT4 protein content before or after training in either group. These results indicate no additional benefit of IHT compared with similar normoxic training: adding a hypoxic stimulus had no effect on anaerobic performance or MCT expression after the three-week training period.
Abstract:
Universities and research institutions have the responsibility to produce science and to provide training to new generations of researchers. In this paper, we propose a model to analyze the determinants of a senior scientist's decisions about allocating time between these tasks. The results of this decision depend upon the characteristics of the research project, the senior scientist's concern for training and the expected innate ability of the junior scientist involved. We analyze the role that a regulator can play in defining both the value of scientific projects and the future population of independent scientists.
Abstract:
The aim of this exploratory study was to assess the impact of clinicians' defense mechanisms (self-protective psychological mechanisms triggered by the affective load of the encounter with the patient) on adherence to a communication skills training (CST). The population consisted of oncology clinicians (N = 31) who participated in a CST. An interview with simulated cancer patients was recorded prior to and 6 months after the CST. Defenses were measured before and after the CST and correlated with a prototype of an ideally conducted interview based on the criteria of the CST teachers. Clinicians who used more adaptive defense mechanisms showed better adherence to communication skills after CST than clinicians with less adaptive defenses (F(1, 29) = 5.26, p = 0.03, d = 0.42). Improvement in communication skills after CST thus seems to depend on the clinician's initial level of defenses prior to CST. Implications for practice and training are discussed. Communication has been recognized as a central element of cancer care [1]. Ineffective communication may contribute to patients' confusion, uncertainty, and increased difficulty in asking questions, expressing feelings, and understanding information [2, 3], and may also contribute to clinicians' lack of job satisfaction and emotional burnout [4]. Communication skills trainings (CST) for oncology clinicians have therefore been widely developed over the last decade. These trainings aim to increase clinicians' skills in responding to patients' needs and to promote an adequate encounter with the patient, with an efficient exchange of information [5]. While CSTs show great diversity in their pedagogic approaches [6, 7], their main elements are (1) role play between participants, (2) analysis of videotaped interviews with simulated patients, and (3) interactive discussion of cases provided by participants.
As recently stated in a consensus paper [8], CSTs need to be taught in small groups (up to 10-12 participants) and last at least 3 days in order to be effective. Several systematic reviews have evaluated the impact of CST on clinicians' communication skills [9-11]. The effectiveness of CST can be assessed through two main approaches: participant-based and patient-based outcomes. Measures can be self-reported, but, according to Gysels et al. [10], behavioral assessment of patient-physician interviews [12] is the most objective and reliable method for measuring change after training. Based on 22 studies of participants' outcomes, Merckaert et al. [9] reported an increase in communication skills, participants' satisfaction with training, and changes in attitudes and beliefs. The evaluation of CST remains a challenging task, and the variables mediating skills improvement remain unidentified. We therefore recently conducted a study evaluating the impact of CST on clinicians' defenses by comparing the evolution of defenses of clinicians participating in a CST with those of a control group without training [13]. Defenses are unconscious psychological processes that protect against anxiety or distress and thereby contribute to the individual's adaptation to stress [14]. Perry uses the term "defensive functioning" to indicate the degree of adaptation linked to an individual's use of a range of specific defenses, ranging from low defensive functioning, when he or she tends to use generally less adaptive defenses (such as projection, denial, or acting out), to high defensive functioning, when he or she tends to use generally more adaptive defenses (such as altruism, intellectualization, or introspection) [15, 16]. Although several authors have addressed the emotional difficulties of oncology clinicians when facing patients and their need to preserve themselves [7, 17, 18], no research has yet been conducted on the defenses of clinicians.
For example, repeated use of less adaptive defenses, such as denial, may allow the clinician to avoid or reduce distress, but it also diminishes his or her ability to respond to the patient's emotions, to identify and respond adequately to the patient's needs, and to foster the therapeutic alliance. Results of the above-mentioned study [13] showed two groups of clinicians: one with higher defensive functioning and one with lower defensive functioning prior to CST. After the training, a difference in defensive functioning between clinicians who participated in the CST and clinicians of the control group was shown only for clinicians with higher defensive functioning. Some clinicians may therefore be more responsive to CST than others. To further address this issue, the present study aimed to evaluate the relationship between the level of adherence to an "ideally conducted interview", as defined by the teachers of the CST, and the level of the clinician's defensive functioning. We hypothesized that, after CST, clinicians with higher defensive functioning would show greater adherence to the "ideally conducted interview" than clinicians with lower defensive functioning.
Abstract:
The mental ability to take the perspective of another person may depend on one's own bodily awareness and experience. In the present study, the former was operationalized as a history of an eating disorder, and the latter as formal experience with dance. The study used a 2 × 2 × 2 factorial design in which reaction times in two mental perspective-taking tasks were compared between female dancers and non-dancers with and without a former eating disorder. Participants were asked to imagine two perspectives: (i) the position of front-facing and back-facing figures (3rd-person perspective-taking task) and (ii) that these same figures were a reflection of themselves in a mirror (1st-person perspective-taking task). In both tasks, a particular hand was indicated on the presented figures, and participants had to decide whether it represented their own left or right hand. Overall, responses were slower for front-facing than back-facing figures in the 3rd-person task, and for back-facing than front-facing figures in the 1st-person task. Importantly, a former history of an eating disorder was related to decreased performance in the 3rd-person perspective-taking task, but only in participants without dance experience. Results from an additional control group (a history of exercise but no dance experience) indicated that dance is particularly beneficial for mental bodily perspective taking. Dance experience, more so than exercise in general, can benefit 3rd-person or extrapersonal perspective taking, supporting the favourable impact of this form of exercise on own-body processing.