351 results for wavelength monitoring
Abstract:
Introduction: Imatinib trough plasma concentrations (Cmin) have been correlated with treatment response in chronic myeloid leukemia (CML) patients. The use of Cmin monitoring for optimizing imatinib dosage (therapeutic drug monitoring [TDM]) has therefore been proposed for patients with unsatisfactory response or tolerance ("rescue TDM"). A cycle of "routine TDM" for dosage individualization could also be beneficial in preventing unfavorable events, yet its clinical usefulness has not been evaluated. We aimed to assess prospectively whether a "routine TDM" intervention targeting an imatinib Cmin of 1000 ng/mL (tolerance, 750-1500 ng/mL) could improve efficacy, tolerance, and persistence on treatment compared with "rescue TDM" use only. Patients and Methods: The Swiss Imatinib COncentration Monitoring Evaluation (I-COME) study was a multicenter randomized controlled trial (ISRCTN31181395). Adult patients in chronic or accelerated phase CML receiving imatinib for ≤5 years were eligible. Patients were randomly (1:1) allocated to receive the "routine TDM" intervention or to serve as controls with access to "rescue TDM" only. All had 1-year follow-up. The primary endpoint was a combined efficacy-safety outcome (failure- and toxicity-free survival without imatinib discontinuation), analyzed by intention-to-treat. Results: Of 56 recruited CML patients, 55 had their molecular and cytogenetic response measured. 14/27 patients receiving "routine TDM" (52% [33%-71%]) remained event-free versus 16/28 control patients with "rescue TDM" only (57% [39%-75%]; P = 0.69). In the "routine TDM" group, dosage recommendations were adopted entirely in 50% of patients (median Cmin at study end, 895 ng/mL; CV = 33%). These patients had fewer unfavorable events (28% [5%-52%]) than patients not receiving the advised dosage (77% [54%-99%]; P = 0.03; median Cmin at study end, 648 ng/mL; CV = 38%). Conclusion: This first prospective target concentration intervention trial could not formally demonstrate a benefit of "routine TDM" of imatinib, mainly because of the small number of patients and the limited adherence of prescribers to dosage recommendations. Nevertheless, the patients receiving the advised dosage more often met target concentrations and the combined outcome (efficacy, tolerance, and persistence). A cycle of routine TDM could thus be beneficial, at least in patients eligible for dosage adjustment. Its usefulness should, however, be further confirmed in larger trials.
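To make the dosage-individualization step concrete, here is a minimal Python sketch of the proportional adjustment rule implied by targeting a Cmin of 1000 ng/mL within a 750-1500 ng/mL window. It assumes roughly dose-linear steady-state pharmacokinetics; the function name, the set of allowed dose steps, and the rounding rule are illustrative assumptions, not the I-COME trial's actual algorithm.

```python
# Illustrative sketch only: proportional dose adjustment toward a target
# imatinib Cmin, assuming roughly dose-linear steady-state pharmacokinetics.
# The dose steps and rounding rule are assumptions, not the trial protocol.

TARGET_CMIN = 1000.0          # ng/mL, target trough concentration
ACCEPTABLE = (750.0, 1500.0)  # ng/mL, tolerance interval around the target
DOSE_STEPS = (200, 300, 400, 500, 600, 700, 800)  # mg/day, hypothetical steps


def suggest_dose(current_dose_mg: float, measured_cmin: float) -> int:
    """Return a suggested daily dose (mg) aiming at TARGET_CMIN.

    If the measured Cmin already lies in the acceptable window, the current
    dose is kept; otherwise the dose is scaled proportionally (linear PK
    assumption) and snapped to the nearest available dose step.
    """
    lo, hi = ACCEPTABLE
    if lo <= measured_cmin <= hi:
        return int(current_dose_mg)
    ideal_dose = current_dose_mg * TARGET_CMIN / measured_cmin
    return min(DOSE_STEPS, key=lambda step: abs(step - ideal_dose))


if __name__ == "__main__":
    # Example: a Cmin of 648 ng/mL on 400 mg/day falls below the window,
    # so a higher daily dose is suggested.
    print(suggest_dose(400, 648))   # -> 600
```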
Abstract:
Given that clay-rich landslides may become mobilized, leading to rapid mass movements (earthflows and debris flows), they pose critical problems in risk management worldwide. The most widely proposed mechanism leading to such flow-like movements is the increase in water pore pressure in the sliding mass, generating partial or complete liquefaction. This solid-to-liquid transition results in a dramatic reduction of mechanical rigidity in the liquefied zones, which could be detected by monitoring shear wave velocity variations. With this purpose in mind, the ambient seismic noise correlation technique was applied to measure the variation in seismic surface wave velocity in the Pont Bourquin landslide (Swiss Alps). This small but active composite earthslide-earthflow was equipped with continuously recording seismic sensors during spring and summer 2010. An earthslide of a few thousand cubic meters was triggered in mid-August 2010, after a rainy period. This article shows that the seismic velocity of the sliding material, measured from daily noise correlograms, decreased continuously and rapidly for several days prior to the catastrophic event. From a spectral analysis of the velocity decrease, it was possible to locate the change at the base of the sliding layer. These results demonstrate that ambient seismic noise can be used to detect rigidity variations before failure and could potentially be used to predict landslides.
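As an illustration of how daily correlograms can be turned into a relative velocity change, the following Python sketch shows the stretching technique commonly used with ambient-noise correlations. It is a generic example, not the processing chain used at Pont Bourquin; the sampling rate, the grid of tested stretch factors, and the synthetic traces are assumptions.

```python
# Generic sketch of the "stretching" estimate of relative velocity change
# (dv/v) between a reference correlogram and a daily correlogram.
# A velocity change dv/v dilates travel times by a factor 1 / (1 + dv/v),
# so the daily trace is compared with stretched copies of itself.
import numpy as np


def dv_over_v(reference: np.ndarray, daily: np.ndarray, dt: float,
              eps_grid=np.linspace(-0.05, 0.05, 201)) -> float:
    """Return the dv/v value whose stretch best matches the reference."""
    t = np.arange(reference.size) * dt
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        # Evaluate the daily trace on a stretched time axis t * (1 + eps).
        stretched = np.interp(t * (1.0 + eps), t, daily)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps  # to first order, dv/v ~ -epsilon


if __name__ == "__main__":
    dt = 0.01                                  # 100 Hz sampling (assumed)
    t = np.arange(0, 10, dt)
    ref = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
    # Simulate a ~1% velocity drop: all arrivals are delayed by ~1%.
    day = np.interp(t, t * 1.01, ref)
    print(round(dv_over_v(ref, day, dt), 3))   # ~ -0.01
```

In practice the stretching is applied to the coda of stacked noise correlations, but the matching principle is the same as in this toy example.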
Abstract:
INTRODUCTION: Continuous EEG (cEEG) is increasingly used to monitor brain function in neuro-ICU patients. However, its value in patients with coma after cardiac arrest (CA), particularly in the setting of therapeutic hypothermia (TH), is only beginning to be elucidated. The aim of this study was to examine whether cEEG performed during TH may predict outcome. METHODS: From April 2009 to April 2010, we prospectively studied 34 consecutive comatose patients treated with TH after CA who were monitored with cEEG, initiated during hypothermia and maintained after rewarming. EEG background reactivity to painful stimulation was tested. We analyzed the association between cEEG findings and neurologic outcome, assessed at 2 months with the Glasgow-Pittsburgh Cerebral Performance Categories (CPC). RESULTS: Continuous EEG recording was started 12 ± 6 hours after CA and lasted 30 ± 11 hours. Nonreactive cEEG background (12 of 15 (80%) among nonsurvivors versus none of 19 (0%) survivors; P < 0.001) and prolonged discontinuous "burst-suppression" activity (11 of 15 (73%) versus none of 19; P < 0.001) were significantly associated with mortality. EEG seizures with absent background reactivity also differed significantly (seven of 15 (47%) versus none of 12 (0%); P = 0.001). In patients with a nonreactive background or seizures/epileptiform discharges on cEEG, no improvement was seen after TH. A nonreactive cEEG background during TH had a positive predictive value of 100% (95% confidence interval (CI), 74 to 100%) and a false-positive rate of 0% (95% CI, 0 to 18%) for mortality. All survivors had cEEG background reactivity, and the majority of them (14 (74%) of 19) had a favorable outcome (CPC 1 or 2). CONCLUSIONS: Continuous EEG monitoring showing a nonreactive or discontinuous background during TH is strongly associated with unfavorable outcome in patients with coma after CA. These data warrant larger studies to confirm the value of continuous EEG monitoring in predicting prognosis after CA and TH.
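The predictive values quoted above follow directly from the 2x2 counts given in the abstract. The short Python sketch below reproduces them using exact Clopper-Pearson binomial confidence limits; it is an illustration of the statistics, not the authors' own analysis code.

```python
# Reproducing the reported predictive values from the abstract's counts
# (nonreactive cEEG background: 12/15 nonsurvivors, 0/19 survivors)
# with exact Clopper-Pearson binomial confidence limits.
from scipy.stats import beta


def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper


# Counts taken from the abstract.
tp, fn = 12, 3    # nonreactive background among the 15 nonsurvivors
fp, tn = 0, 19    # nonreactive background among the 19 survivors

ppv = tp / (tp + fp)                     # 12/12 = 100%
ppv_ci = clopper_pearson(tp, tp + fp)    # ~ (0.74, 1.00)

fpr = fp / (fp + tn)                     # 0/19 = 0%
fpr_ci = clopper_pearson(fp, fp + tn)    # ~ (0.00, 0.18)

print(f"PPV = {ppv:.0%} (95% CI {ppv_ci[0]:.0%}-{ppv_ci[1]:.0%})")
print(f"FPR = {fpr:.0%} (95% CI {fpr_ci[0]:.0%}-{fpr_ci[1]:.0%})")
```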
Abstract:
Drug development has improved over recent decades, with refinements in analytical techniques, population pharmacokinetic-pharmacodynamic (PK-PD) modelling and simulation, and new biomarkers of efficacy and tolerability. Yet this progress has not yielded improvements in individualization of treatment and monitoring, owing to various obstacles: monitoring is complex and demanding, many monitoring procedures have been instituted without critical assessment of the underlying evidence and rationale, controlled clinical trials are sparse, monitoring procedures are poorly validated and both drug manufacturers and regulatory authorities take insufficient account of the importance of monitoring. Drug concentration and effect data should be increasingly collected, analyzed, aggregated and disseminated in forms suitable for prescribers, along with efficient monitoring tools and evidence-based recommendations regarding their best use. PK-PD observations should be collected for both novel and established critical drugs and applied to observational data, in order to establish whether monitoring would be suitable. Methods for aggregating PK-PD data in systematic reviews should be devised. Observational and intervention studies to evaluate monitoring procedures are needed. Miniaturized monitoring tests for delivery at the point of care should be developed and harnessed to closed-loop regulated drug delivery systems. Intelligent devices would enable unprecedented precision in the application of critical treatments, i.e. those with life-saving efficacy, narrow therapeutic margins and high interpatient variability. Pharmaceutical companies, regulatory agencies and academic clinical pharmacologists share the responsibility of leading such developments, in order to ensure that patients obtain the greatest benefit and suffer the least harm from their medicines.
Abstract:
Background: Recent data have suggested that a population of CD4+ CD25high T cells, phenotypically characterized by the expression of CD45RO and CD127, is significantly expanded in stable liver and kidney transplant recipients and represents alloreactive T cells. Induction therapies may have an impact on this alloreactive T cell population. In this study, we prospectively analyzed CD4+ CD25high CD45RO+ CD127high T cells after induction with either thymoglobulin or basiliximab. Patients and methods: A total of twenty-seven kidney transplant recipients were prospectively enrolled; 14 received thymoglobulin induction followed by a 4-day course of steroids with tacrolimus and mycophenolate mofetil («thymo group»), and 13 received basiliximab induction followed by standard triple immunosuppression (tacrolimus, mycophenolate mofetil and prednisone) («BSX group»). Phenotypic analysis by flow cytometry of the expression of CD25, CD45RO and CD127 on peripheral CD4+ T cells was performed at 0, 3 and 6 months after transplantation. Twenty-four healthy subjects (HS) were studied as controls. Results: There were no differences in baseline characteristics between the groups; at 6 months, patient survival (100%), graft survival (100%), serum creatinine (thymo group versus BSX group: 129 versus 125 micromol/l) and acute rejection (2/14 versus 2/13) were not significantly different. Thymoglobulin induction produced prolonged CD4+ T cell depletion. Compared with pre-transplantation values, an expansion of the alloreactive T cell population was observed at 3 months in both the thymo (mean: from 6.38% to 14.72%) and BSX (mean: from 8.01% to 18.42%) groups. At 6 months, the alloreactive T cell population remained significantly expanded in the thymo group (16.92 ± 2.87%) whereas it tended to decrease in the BSX group (10.22 ± 1.38%). Conclusion: Overall, our results indicate that expansion of alloreactive T cells occurs rapidly after transplantation in patients receiving either thymoglobulin or BSX induction. Whether differences emerge at later timepoints, or whether different immunosuppressive regimens modify this alloreactive population, remains to be studied.
Abstract:
From data collected during routine TDM, plasma concentrations of citalopram (CIT) and its metabolites demethylcitalopram (DCIT) and didemethylcitalopram (DDCIT) were measured in 345 plasma samples collected under steady-state conditions. They came from 258 patients treated with usual doses (20-60 mg/d) and from patients medicated with 80-360 mg/d CIT. Most patients had one or several comedications, including other antidepressants, antipsychotics, lithium, anticonvulsants, psychostimulants and somatic medications. Dose-corrected CIT plasma concentrations (C/D ratio) were 2.51 ± 2.25 ng mL⁻¹ mg⁻¹ (n = 258; mean ± SD). Patients >65 years had significantly higher dose-corrected CIT plasma concentrations (n = 56; 3.08 ± 1.35 ng mL⁻¹ mg⁻¹) than younger patients (n = 195; 2.35 ± 2.46 ng mL⁻¹ mg⁻¹) (P = 0.03). CIT plasma concentrations in the generally recommended dose range were [mean ± SD (median)]: 57 ± 64 (45) ng/mL (10-20 mg/d; n = 64) and 117 ± 95 (91) ng/mL (21-60 mg/d; n = 96). At higher than usual doses, the following CIT concentrations were measured: 61-120 mg/d, 211 ± 103 (190) ng/mL (n = 93); 121-200 mg/d, 339 ± 143 (322) ng/mL (n = 70); 201-280 mg/d, 700 ± 408 (565) ng/mL (n = 18); 281-360 mg/d, 888 ± 620 (616) ng/mL (n = 4). When only one sample per patient is considered (at the highest daily dose when repeated measurements were available), there is a linear and significant correlation (n = 48, r = 0.730; P < 0.001) between daily dose (10-200 mg/d) and CIT plasma concentration. In experiments with dogs, DDCIT was reported to affect the QT interval when present at concentrations >300 ng/mL. In this study, the DDCIT concentration reached 100 ng/mL in a patient treated with 280 mg/d CIT. Twelve other patients treated with 140-320 mg/d CIT had plasma DDCIT concentrations within the range 52-73 ng/mL. In a subgroup comprising patients treated with ≥160 mg/d CIT who had CIT plasma concentrations ≤300 ng/mL, and patients treated with ≤200 mg/d CIT who had CIT plasma concentrations ≥600 ng/mL, the enantiomers of CIT and DCIT were also analyzed. The highest S-CIT concentration measured in this subgroup was 327 ng/mL in a patient treated with 140 mg/d CIT, but the highest S-CIT concentration overall (632 ng/mL) was measured in a patient treated with 360 mg/d CIT. In conclusion, there is a highly linear correlation between CIT plasma concentrations and CIT doses, even at doses well above the usual range.
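Two simple computations underlie most of the figures above: the dose-corrected concentration (C/D ratio) and the Pearson correlation between daily dose and CIT concentration. The Python sketch below shows both on a small invented data set; the numbers are hypothetical and are not the study data.

```python
# Sketch of the two calculations behind the abstract's figures: the
# dose-corrected plasma concentration (C/D ratio, ng/mL per mg/day) and
# the Pearson correlation between daily dose and CIT concentration.
# The (dose, concentration) pairs below are invented for illustration.
import numpy as np


def cd_ratio(concentration_ng_ml: float, dose_mg_per_day: float) -> float:
    """Dose-corrected concentration in ng mL^-1 mg^-1."""
    return concentration_ng_ml / dose_mg_per_day


# Hypothetical trough concentrations, one sample per patient.
doses = np.array([20, 40, 60, 80, 120, 160, 200], dtype=float)    # mg/day
concs = np.array([50, 95, 140, 180, 300, 390, 520], dtype=float)  # ng/mL

ratios = np.array([cd_ratio(c, d) for c, d in zip(concs, doses)])
r = np.corrcoef(doses, concs)[0, 1]

print(f"mean C/D ratio = {ratios.mean():.2f} ng/mL per mg/day")
print(f"Pearson r(dose, concentration) = {r:.3f}")
```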
Abstract:
Numerous phase I and II clinical trials testing the safety and immunogenicity of various peptide vaccine formulations based on CTL-defined tumor antigens in cancer patients have been reported during the last 7 years. While specific T-cell responses can be detected in a variable fraction of immunized patients, an even smaller but significant fraction of these patients show objective tumor responses. Efficient therapeutic vaccination should aim at boosting naturally occurring antitumor T- and B-cell responses and at sustaining a large number of tumor antigen-specific, fully functional effector T cells at tumor sites. Recent progress in our ability to monitor tumor antigen-specific CD8 T-cell responses quantitatively and qualitatively will greatly help in making rapid progress in this field.