975 results for Delayed Response Task
Abstract:
Following the idea that response inhibition processes play a central role in concealing information, the present study investigated the influence of a Go/No-go task, performed as an interfering mental activity in parallel to the Concealed Information Test (CIT), on the detectability of concealed information. Forty undergraduate students participated in a mock-crime experiment and simultaneously performed a CIT and a Go/No-go task. Electrodermal activity (EDA), respiration line length (RLL), heart rate (HR) and finger pulse waveform length (FPWL) were recorded. Reaction times were recorded as behavioral measures in the Go/No-go task as well as in the CIT. As a within-subject control condition, the CIT was also applied without an additional task. The parallel task did not influence the mean differences between the physiological responses to the mock-crime-related probe and the irrelevant items. This finding may be because the parallel task induced tonic rather than phasic mental activity, which did not influence differential responding to CIT items. No physiological evidence for an interaction between the parallel task and sub-processes of deception (e.g. inhibition) was found. Subjects' performance in the Go/No-go parallel task did not contribute to the detection of concealed information. Establishing generalizability will require further investigation of different variations of the parallel task.
Abstract:
BACKGROUND: T cells play a key role in delayed-type drug hypersensitivity reactions. Their reactivity can be assessed by their proliferation in response to the drug in the lymphocyte transformation test (LTT). However, the LTT imposes limitations in terms of practicability, and an alternative method that is easier to implement would be desirable. METHODS: Four months to 12 years after acute drug hypersensitivity reactions, CD69 upregulation on T cells of 15 patients and five healthy controls was analyzed by flow cytometry. RESULTS: All 15 LTT-positive patients showed a significant increase of CD69 expression on T cells after 48 h of drug stimulation, exclusively with the drugs implicated in their hypersensitivity reactions. A stimulation index cut-off of 2 allowed discrimination between nonreactive and reactive T cells in both the LTT and the CD69 upregulation assay. Between 0.5% and 3% of T cells showed CD69 upregulation. The reactive cell population consisted of a minority of truly drug-reactive T cells secreting cytokines and a larger number of bystander T cells activated by IL-2 and possibly other cytokines. CONCLUSIONS: CD69 upregulation was observed after 2 days in all patients with a positive LTT after 6 days; it thus appears to be a promising tool to identify drug-reactive T cells in the peripheral blood of patients with drug hypersensitivity reactions.
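The stimulation-index criterion described above can be sketched in a few lines. This is not the authors' analysis code; the function names and the example values are hypothetical, and only the cut-off of 2 comes from the abstract:

```python
def stimulation_index(stimulated: float, unstimulated: float) -> float:
    """Ratio of drug-stimulated to unstimulated CD69 expression."""
    if unstimulated <= 0:
        raise ValueError("unstimulated expression must be positive")
    return stimulated / unstimulated

def is_reactive(stimulated: float, unstimulated: float, cutoff: float = 2.0) -> bool:
    """A sample is classified as reactive when the stimulation index reaches the cut-off."""
    return stimulation_index(stimulated, unstimulated) >= cutoff

# Hypothetical percentages of CD69+ T cells, for illustration only
print(is_reactive(2.4, 0.8))  # SI = 3.0 -> True
print(is_reactive(1.1, 0.8))  # SI ~ 1.4 -> False
```

The same ratio-plus-cut-off logic underlies the LTT's proliferation-based stimulation index, which is why a single threshold can be compared across both assays.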
Abstract:
INTRODUCTION: Angiogenesis is known to be a critical and closely regulated step during bone formation and fracture healing, driven by a complex interaction of various cytokines. Delays in bone healing or even nonunion might therefore be associated with altered concentrations of specific angiogenic factors, and these alterations might in turn be reflected by changes in serum concentrations. METHOD: To determine physiological time courses of angiogenic cytokines during fracture healing, as well as possible changes associated with failed consolidation, we prospectively collected serum samples from patients who had undergone surgical treatment for a long bone fracture. Fifteen patients without fracture healing 4 months after surgery (nonunion group) were matched to a collective of 15 patients with successful healing (union group). Serum concentrations of angiogenin (ANG), angiopoietin 2 (Ang-2), basic fibroblast growth factor (bFGF), platelet-derived growth factor AB (PDGF-AB), pleiotrophin (PTN) and vascular endothelial growth factor (VEGF) were measured using enzyme-linked immunosorbent assays over a period of 24 weeks. RESULTS: Compared with reference values of healthy uninjured controls, serum concentrations of VEGF, bFGF and PDGF were increased in both groups. Peak concentrations of these cytokines were reached during early fracture healing. Serum concentrations of bFGF and PDGF-AB were significantly higher in the union group at 2 and 4 weeks after the injury when compared with the nonunion group. Serum concentrations of ANG and Ang-2 declined steadily from the first measurement in normally healing fractures, while no significant changes over time could be detected for these factors in nonunion patients. PTN serum levels increased asymptotically over the entire investigation period in timely fracture healing, while no such increase could be detected during delayed healing.
CONCLUSION: We conclude that fracture healing in human subjects is accompanied by distinct changes in systemic levels of specific angiogenic factors. In patients developing a fracture nonunion, significant alterations of these physiologic changes could be detected as early as 2 weeks (bFGF) and 4 weeks (PDGF-AB) after initial trauma surgery.
Abstract:
BACKGROUND: The most prevalent drug hypersensitivity reactions are T-cell mediated. The only established in vitro test for detecting T-cell sensitization to drugs is the lymphocyte transformation test, which is of limited practicability. To find an alternative in vitro method to detect drug-sensitized T cells, we screened the in vitro secretion of 17 cytokines/chemokines by peripheral blood mononuclear cells (PBMC) of patients with well-documented drug allergies, in order to identify the most promising cytokines/chemokines for detection of T-cell sensitization to drugs. METHODS: Peripheral blood mononuclear cells of 10 patients, five allergic to beta-lactams and five to sulfonamides, and of five healthy controls were incubated for 3 days with the drug antigen. Cytokine concentrations were measured in the supernatants using commercially available 17-plex bead-based immunoassay kits. RESULTS: Among the 17 cytokines/chemokines analysed, secretion of interleukin-2 (IL-2), IL-5, IL-13 and interferon-gamma (IFN-gamma) in response to the drugs was significantly increased in patients when compared with healthy controls. No difference in cytokine secretion patterns between sulfonamide- and beta-lactam-reactive PBMC could be observed. The secretion of other cytokines/chemokines showed high variability among patients. CONCLUSION: The measurement of IL-2, IL-5, IL-13 or IFN-gamma, or a combination thereof, might be a useful in vitro tool for detection of T-cell sensitization to drugs. Secretion of these cytokines seems independent of the type of drug antigen and the phenotype of the drug reaction. A study including a higher number of patients and controls will be needed to determine the exact sensitivity and specificity of this test.
Abstract:
BACKGROUND: Only responding patients benefit from preoperative therapy for locally advanced esophageal carcinoma. Early detection of non-responders may avoid futile treatment and delayed surgery. PATIENTS AND METHODS: In a multi-center phase II trial, patients with resectable, locally advanced esophageal carcinoma were treated with 2 cycles of induction chemotherapy followed by chemoradiotherapy (CRT) and surgery. Positron emission tomography with 2-[fluorine-18]fluoro-2-deoxy-D-glucose (FDG-PET) was performed at baseline and after induction chemotherapy. The metabolic response was correlated with tumor regression grade (TRG). A decrease in FDG tumor uptake of less than 40% was prospectively hypothesized as a predictor of histopathological non-response (TRG > 2) after CRT. RESULTS: Forty-five patients were included. The median decrease in FDG tumor uptake after chemotherapy correlated well with TRG after completion of CRT (p = 0.021). For an individual patient, a decrease in FDG tumor uptake of less than 40% after induction chemotherapy predicted histopathological non-response after completion of CRT with a sensitivity of 68% and a specificity of 52% (positive predictive value 58%, negative predictive value 63%). CONCLUSIONS: Metabolic response correlated with histopathology after preoperative therapy. However, FDG-PET did not predict non-response after induction chemotherapy with sufficient clinical accuracy to justify withdrawal of subsequent CRT and selection of patients to proceed directly to surgery.
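The predictive values reported above follow from sensitivity, specificity, and the prevalence of non-response via Bayes' rule. A minimal sketch of that arithmetic (the ~50% non-response prevalence is an assumption chosen for illustration, not a figure quoted from the trial, though it reproduces the reported values closely):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from sensitivity, specificity and prevalence
    (Bayes' rule applied to a binary diagnostic test)."""
    tp = sensitivity * prevalence                   # true positive rate in the cohort
    fp = (1.0 - specificity) * (1.0 - prevalence)   # false positives
    tn = specificity * (1.0 - prevalence)           # true negatives
    fn = (1.0 - sensitivity) * prevalence           # false negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Sensitivity and specificity as reported; prevalence assumed at 0.50
ppv, npv = predictive_values(0.68, 0.52, 0.50)
print(round(ppv, 2), round(npv, 2))  # 0.59 0.62
```

Because PPV and NPV shift with prevalence, the reported 58%/63% apply only to cohorts with a similar proportion of non-responders.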
Abstract:
Hypertension is a known risk factor for cardiovascular disease. Hypertensive individuals show exaggerated norepinephrine (NE) reactivity to stress. Norepinephrine is a known lipolytic factor. It is unclear whether, in hypertensive individuals, stress-induced increases in NE are linked with elevations in stress-induced circulating lipid levels. Such a mechanism could have implications for atherosclerotic plaque formation. In a cross-sectional, quasi-experimentally controlled study, 22 hypertensive and 23 normotensive men (mean +/- SEM, 45 +/- 3 years) underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We measured plasma NE and the plasma lipid profile (total cholesterol [TC], low-density-lipoprotein cholesterol [LDL-C], high-density-lipoprotein cholesterol, and triglycerides) immediately before and after stress and at 20 and 60 minutes of recovery. All lipid levels were corrected for stress hemoconcentration. Compared with normotensives, hypertensives had greater TC (P = .030) and LDL-C (P = .037) stress responses. Independent of each other, mean arterial pressure (MAP) upon screening and immediate increase in NE predicted immediate stress change in TC (MAP: beta = .41, P = .003; NE: beta = .35, P = .010) and LDL-C (MAP: beta = .32, P = .024; NE: beta = .38, P = .008). Mean arterial pressure alone predicted triglyceride stress change (beta = .32, P = .043) independent of NE stress change, age, and BMI. The MAP-by-NE interaction independently predicted immediate stress change of high-density-lipoprotein cholesterol (beta = -.58, P < .001) and of LDL-C (beta = -.25, P < .08). We conclude that MAP and NE stress reactivity may elicit proatherogenic changes of plasma lipids in response to acute psychosocial stress, providing one mechanism by which stress might increase cardiovascular risk in hypertension.
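The standardized regression coefficients (betas) reported above can be illustrated with a minimal sketch: z-score the outcome and the predictors, then fit ordinary least squares. The data here are synthetic stand-ins generated for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for MAP at screening, NE stress change, and TC stress change
n = 45
map_screen = rng.normal(100, 10, n)
ne_change = rng.normal(200, 50, n)
tc_change = (0.4 * (map_screen - 100) / 10
             + 0.35 * (ne_change - 200) / 50
             + rng.normal(0, 1, n))

def zscore(x):
    """Standardize to mean 0, SD 1 so coefficients are comparable across predictors."""
    return (x - x.mean()) / x.std()

# Standardized betas: regress z-scored outcome on z-scored predictors
X = np.column_stack([zscore(map_screen), zscore(ne_change)])
y = zscore(tc_change)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # two standardized coefficients, analogous to the betas reported above
```

Because everything is z-scored, each beta expresses the change in the outcome (in SD units) per SD change in a predictor, holding the other predictor constant.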
Abstract:
OBJECTIVES We sought to analyze the time course of atrial fibrillation (AF) episodes before and after circular plus linear left atrial ablation, and the percentage of patients with complete freedom from AF after ablation, by using serial seven-day electrocardiograms (ECGs). BACKGROUND The curative treatment of AF targets the pathophysiological cornerstones of AF (i.e., the initiating triggers and/or the perpetuation of AF). The pathophysiological complexity of both may not result in an "all-or-nothing" response but may instead modify the number and duration of AF episodes. METHODS In patients with highly symptomatic AF, circular plus linear ablation lesions were placed around the left and right pulmonary veins, between the two circles, and from the left circle to the mitral annulus using an electroanatomic mapping system. Repeated continuous 7-day ECGs recorded before and after catheter ablation were used for rhythm follow-up. RESULTS In 100 patients with paroxysmal (n = 80) and persistent (n = 20) AF, the relative duration of time spent in AF decreased significantly over time (35 +/- 37% before ablation, 26 +/- 41% directly after ablation, and 10 +/- 22% after 12 months). Freedom from AF increased stepwise in patients with paroxysmal AF and at 12 months measured 88% or 74%, depending on whether 24-h or 7-day ECG was used. Complete pulmonary vein isolation was demonstrated in <20% of the circular lesions. CONCLUSIONS The results obtained in patients with AF treated with circular plus linear left atrial lesions strongly indicate that substrate modification is the main underlying pathophysiologic mechanism and that it results in a delayed rather than an immediate cure.
Abstract:
When switching tasks, if stimuli are presented that contain features cueing two of the tasks in the set (i.e., bivalent stimuli), performance slowing is observed on all tasks. This generalized slowing extends to tasks in the set that have no features in common with the bivalent stimulus and is referred to as the bivalency effect. In previous work, the bivalency effect was invoked by presenting occasionally occurring bivalent stimuli; therefore, the possibility that the generalized slowing is simply due to surprise (as opposed to bivalency) had not yet been ruled out. This question was addressed in two task-switching experiments in which the occasionally occurring stimuli were either bivalent (bivalent version) or merely surprising (surprising version). The results confirmed that the generalized slowing was much greater in the bivalent version of both experiments, demonstrating that the magnitude of this effect is greater than can be accounted for by simple surprise. This set of results suggests that slowing task execution when encountering bivalent stimuli may be fundamental to efficient task switching, as adaptive tuning of response style may serve to prepare the cognitive system for possible future high-conflict trials.
Abstract:
Implicit task sequence learning (TSL) can be considered an extension of implicit sequence learning, which is typically tested with the classical serial reaction time task (SRTT). By design, in the SRTT there is a correlation between the sequence of stimuli to which participants must attend and the sequence of motor movements/key presses with which participants must respond. The TSL paradigm makes it possible to disentangle this correlation and to separately manipulate the presence/absence of a sequence of tasks, a sequence of responses, and even other streams of information such as stimulus locations or stimulus-response mappings. Here I review the state of TSL research, which points to the critical role of the presence of correlated streams of information in implicit sequence learning. On a more general level, I propose that beyond correlated streams of information, a simple statistical learning mechanism may also be involved in implicit sequence learning, and that the relative contribution of these two explanations differs according to task requirements. With this differentiation, conflicting results can be integrated into a coherent framework.
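The simple statistical learning mechanism proposed above can be illustrated by estimating first-order transition probabilities from a stream of task labels. This sketch and its repeating A-B-C sequence are illustrative assumptions, not the paradigm's actual materials:

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """Estimate first-order transition probabilities P(next | current)
    from a sequence of task labels (a minimal statistical-learning model)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

# Hypothetical repeating task sequence, as used in TSL-style designs
seq = list("ABCABCABCABC")
probs = transition_probabilities(seq)
print(probs["A"])  # {'B': 1.0} -- after task A, task B always follows
```

A learner tracking such conditional probabilities could anticipate the upcoming task without any correlated response or stimulus stream, which is what distinguishes this account from learning based on correlated streams of information.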
Abstract:
BACKGROUND Pathology studies have shown delayed arterial healing in culprit lesions of patients with acute coronary syndrome (ACS) compared with stable coronary artery disease (CAD) after placement of drug-eluting stents (DES). It is unknown whether similar differences exist in vivo during long-term follow-up. Using optical coherence tomography (OCT), we assessed differences in arterial healing between patients with ACS and stable CAD five years after DES implantation. METHODS AND RESULTS A total of 88 patients, comprising 53 ACS lesions with 7864 struts and 35 stable lesions with 5298 struts, were suitable for final OCT analysis five years after DES implantation. The analytical approach was based on a hierarchical Bayesian random-effects model. OCT endpoints were strut coverage, malapposition, protrusion, evaginations and cluster formation. Uncovered (1.7% vs. 0.7%, adjusted p=0.041) or protruding struts (0.50% vs. 0.13%, adjusted p=0.038) were more frequent in ACS than in stable CAD lesions. A similar trend was observed for malapposed struts (1.33% vs. 0.45%, adjusted p=0.072). Clusters of uncovered or malapposed/protruding struts were present in 34.0% of ACS and 14.1% of stable patients (adjusted p=0.041). Coronary evaginations were more frequent in patients with ST-elevation myocardial infarction than in stable CAD patients (0.16 vs. 0.13 per cross-section, p=0.027). CONCLUSION Uncovered, malapposed, and protruding stent struts, as well as clusters of delayed healing, may be more frequent in culprit lesions of ACS patients than in stable CAD patients late after DES implantation. Our observational findings suggest a differential healing response in vivo attributable to lesion characteristics of patients with ACS compared with stable CAD.
Abstract:
Sub-fossil Cladocera were studied in a core from Gerzensee (Swiss Plateau) covering the late-glacial periods of the Oldest Dryas, Bølling, and Allerød. Cladocera assemblages were dominated by the cold-tolerant littoral taxa Chydorus sphaericus, Acroperus harpae, Alonella nana, Alona affinis, and Alonella excisa. The rapid warming at the beginning of the Bølling (GI-1e), ca. 14,650 yr before present (BP: before AD 1950), was indicated by an abrupt 2‰ shift in carbonate δ18O and a clear change in pollen assemblages. Cladocera assemblages, in contrast, changed more gradually. C. sphaericus and A. harpae are the most cold-tolerant taxa, and their abundance was highest in the earliest part of the record. Only 150–200 years after the beginning of the Bølling warming did we observe an increase in the less cold-tolerant A. excisa and A. affinis. The establishment of Alona guttata, A. guttata var. tuberculata, and Pleuroxus uncinatus was delayed by ca. 350, 770, and 800 years, respectively, after the onset of the Bølling. The development of the Cladocera assemblages suggests increasing water temperatures during the Bølling/Allerød, which agrees with the interpretation by von Grafenstein et al. (2013-this issue) that decreasing δ18O values in carbonates in this period reflect increasing summer water temperatures at the sediment–water interface. Other processes also affected the Cladocera community, including the development and diversification of aquatic vegetation favourable for Cladocera. The record is clearly dominated by Chydoridae, as expected for a littoral core. Yet the planktonic Eubosmina group occurred throughout the core, with the exception of a period at ca. 13,760–13,420 yr BP. Lake levels reconstructed for this period are relatively low, indicating that the littoral location might have become too shallow for Eubosmina at that time.
Abstract:
This study investigates neural language organization in very preterm born children compared to control children and examines the relationship between language organization, age, and language performance. Fifty-six preterms and 38 controls (7–12 y) completed a functional magnetic resonance imaging language task. Lateralization and signal change were computed for language-relevant brain regions. Younger preterms showed a bilateral language network whereas older preterms revealed left-sided language organization. No age-related differences in language organization were observed in controls. Results indicate that preterms maintain atypical bilateral language organization longer than term born controls. This might reflect a delay of neural language organization due to very premature birth.
Abstract:
Animal work implicates the brain-derived neurotrophic factor (BDNF) in function of the ventral striatum (VS), a region known for its role in processing valenced feedback. Recent evidence in humans shows that the BDNF Val66Met polymorphism modulates VS activity in anticipation of monetary feedback. However, it remains unclear whether the polymorphism impacts the processing of self-attributed feedback differently from feedback attributed to an external agent. In this study, we emphasize the importance of feedback attribution because agency is central to computational accounts of the striatum and cognitive accounts of valence processing. We used functional magnetic resonance imaging and a task in which financial gains/losses are either attributable to performance (self-attributed, SA) or chance (externally-attributed, EA) to ask whether the BDNF Val66Met polymorphism predicts VS activity. We found that the BDNF Val66Met polymorphism influenced how feedback valence and agency information were combined in the VS and in the right inferior frontal junction (IFJ). Specifically, Met carriers' VS response to valenced feedback depended on agency information, while Val/Val carriers' VS response did not. This context-specific modulation of valence effectively amplified VS responses to SA losses in Met carriers. The IFJ response to SA losses also differentiated Val/Val from Met carriers. These results may point to a reduced allocation of attention and altered motivational salience to SA losses in Val/Val compared to Met carriers. Implications for major depressive disorder are discussed.
Abstract:
Flavonoid-rich dark chocolate consumption benefits cardiovascular health, but the underlying mechanisms are elusive. We investigated the acute effect of dark chocolate on the reactivity of prothrombotic measures to psychosocial stress. Healthy men aged 20-50 years (mean ± SD: 35.7 ± 8.8) were assigned to a single serving of either 50 g of flavonoid-rich dark chocolate (n=31) or 50 g of optically identical flavonoid-free placebo chocolate (n=34). Two hours after chocolate consumption, both groups underwent an acute standardised psychosocial stress task combining public speaking and mental arithmetic. We determined plasma levels of four stress-responsive prothrombotic measures (i.e., fibrinogen, clotting factor VIII activity, von Willebrand factor antigen, fibrin D-dimer) prior to chocolate consumption, immediately before and after stress, and at 10 and 20 minutes after stress cessation. We also measured the flavonoid epicatechin and the catecholamines epinephrine and norepinephrine in plasma. The dark chocolate group showed a significantly attenuated stress reactivity of the hypercoagulability marker D-dimer (F=3.87, p=0.017) relative to the placebo chocolate group. Moreover, the blunted D-dimer stress reactivity was related to higher plasma levels of the flavonoid epicatechin assessed before stress (F=3.32, p=0.031) but not to stress-induced changes in catecholamines (p's=0.35). There were no significant group differences in the other coagulation measures (p's≥0.87). Adjustment for covariates did not alter these findings. In conclusion, our findings indicate that a single consumption of flavonoid-rich dark chocolate blunted the acute prothrombotic response to psychosocial stress, thereby perhaps mitigating the risk of acute coronary syndromes triggered by emotional stress.
Abstract:
The evaluation for European Union market approval of coronary stents falls under the Medical Device Directive that was adopted in 1993. Specific requirements for the assessment of coronary stents are laid out in supplementary advisory documents. In response to a call by the European Commission to make recommendations for a revision of the advisory document on the evaluation of coronary stents (Appendix 1 of MEDDEV 2.7.1), the European Society of Cardiology (ESC) and the European Association of Percutaneous Cardiovascular Interventions (EAPCI) established a Task Force to develop an expert advisory report. As a basis for its report, the ESC-EAPCI Task Force reviewed existing processes, established a comprehensive list of all coronary drug-eluting stents that have received a CE mark to date, and undertook a systematic review of the literature of all published randomized clinical trials evaluating clinical and angiographic outcomes of coronary artery stents between 2002 and 2013. Based on these data, the Task Force provided recommendations to inform a new regulatory process for coronary stents. The main recommendations of the Task Force include implementation of a standardized non-clinical assessment of stents and a novel clinical evaluation pathway for market approval. The two-stage clinical evaluation plan includes a recommendation for an initial pre-market trial with objective performance criteria (OPC) benchmarking using invasive imaging follow-up, leading to conditional CE-mark approval, and a subsequent mandatory, large-scale randomized trial with clinical endpoint evaluation, leading to an unconditional CE-mark. The data analysis from the systematic review of the Task Force may provide a basis for determination of OPC for use in future studies. This paper represents an executive summary of the Task Force's report.