28 results for Trough
Abstract:
A lack of quantitative high resolution paleoclimate data from the Southern Hemisphere limits the ability to examine current trends within the context of long-term natural climate variability. This study presents a temperature reconstruction for southern Tasmania based on analyses of a sediment core from Duckhole Lake (43.365°S, 146.875°E). The relationship between non-destructive whole core scanning reflectance spectroscopy measurements in the visible spectrum (380–730 nm) and the instrumental temperature record (ad 1911–2000) was used to develop a calibration-in-time reflectance spectroscopy-based temperature model. Results showed that a trough in reflectance from 650 to 700 nm, which represents chlorophyll and its derivatives, was significantly correlated to annual mean temperature. A calibration model was developed (R = 0.56, p < 0.05, root mean squared error of prediction (RMSEP) = 0.21°C, five-year filtered data, calibration period 1911–2000) and applied down-core to reconstruct annual mean temperatures in southern Tasmania over the last c. 950 years. This indicated that temperatures were initially cool c. ad 1050, but steadily increased until the late ad 1100s. After a brief cool period in the ad 1200s, temperatures again increased. Temperatures steadily decreased during the ad 1600s and remained relatively stable until the start of the 20th century when they rapidly decreased, before increasing from the ad 1960s onwards. Comparisons with high resolution temperature records from western Tasmania, New Zealand and South America revealed some similarities, but also highlighted differences in temperature variability across the mid-latitudes of the Southern Hemisphere. These are likely due to a combination of factors including the spatial variability in climate between and within regions, and differences between records that document seasonal (i.e. warm season/late summer) versus annual temperature variability.
This highlights the need for further records from the mid-latitudes of the Southern Hemisphere in order to constrain past natural spatial and seasonal/annual temperature variability in the region, and to accurately identify and attribute changes to natural variability and/or anthropogenic activities.
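A calibration-in-time model of the kind described above amounts to an ordinary least-squares regression of instrumental temperature on the scanned reflectance-trough metric, with the RMSEP estimated by cross-validation. The sketch below illustrates this with synthetic data; the variable names and the synthetic series are assumptions for illustration, not the study's data or its exact statistical procedure.

```python
import numpy as np

def calibrate_in_time(trough_depth, temperature):
    """OLS fit over the calibration period: temperature ~ a * trough_depth + b."""
    a, b = np.polyfit(trough_depth, temperature, 1)
    return a, b

def rmsep_loo(trough_depth, temperature):
    """Root mean squared error of prediction via leave-one-out cross-validation."""
    n = len(trough_depth)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        a, b = np.polyfit(trough_depth[mask], temperature[mask], 1)
        errs.append(temperature[i] - (a * trough_depth[i] + b))
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic "calibration period" data (illustrative only).
rng = np.random.default_rng(0)
depth = rng.uniform(0.1, 0.9, 90)                    # chlorophyll trough depth, 650-700 nm
temp = 11.0 + 1.5 * depth + rng.normal(0, 0.2, 90)   # annual mean temperature, degC

a, b = calibrate_in_time(depth, temp)
reconstruction = a * depth + b   # in the real study, applied down-core to older sediment
print(round(rmsep_loo(depth, temp), 2))
```

The same fitted coefficients are then applied to reflectance measurements below the calibration interval to produce the down-core temperature reconstruction.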
Abstract:
Most intense cold surges and associated frost events in southern and southeastern Brazil are characterized by a large amplitude trough over South America extending toward tropical latitudes and a ridge to the west of it over the Pacific Ocean. In this study, potential vorticity (PV) streamers serve to examine the flow conditions leading to cold surges. Case studies suggest that several PV anomalies are related to cold surge episodes: (1) the 2-potential-vorticity-unit (2-PVU) isoline upstream of South America becomes progressively more distorted prior to and during the cold surge episode, indicating a flow situation which is conducive to Rossby wave breaking and hence a flow which strongly deviates from zonality; (2) the initial stage of a cold surge episode is characterized by a northward bulging of high-PV air to the east of the Andes, resulting in a PV streamer whose northern end reaches Uruguay and southeastern Brazil; the strong PV gradient on its western flank constitutes a flow configuration that induces and maintains the transport of sub-Antarctic air toward the subtropics; (3) a distinct negative PV anomaly, a blocking, originates over the eastern South Pacific, upstream of the South American sector. A composite analysis of 27 cold surges is performed for stratospheric PV streamer frequency on several isentropic surfaces. It reveals that equatorward wave breaking over South America and the western South Atlantic represents an important potential component of the dynamics of intense cold surges. The indications are most pronounced around the isentropic level of 320 K and immediately before the day with the largest temperature drops over subtropical Brazil.
Abstract:
Objectives: To determine HIV-1 RNA in cerebrospinal fluid (CSF) of successfully treated patients and to evaluate if combination antiretroviral treatments with higher central nervous system penetration-effectiveness (CPE) achieve better CSF viral suppression. Methods: Viral loads (VLs) and drug concentrations of lopinavir, atazanavir, and efavirenz were measured in plasma and CSF. The CPE was calculated using 2 different methods. Results: The authors analyzed 87 CSF samples of 60 patients. In 4 CSF samples, HIV-1 RNA was detectable with 43–82 copies per milliliter. Median CPE in patients with detectable CSF VL was significantly lower compared with individuals with undetectable VL: CPE of 1.0 (range, 1.0–1.5) versus 2.3 (range, 1.0–3.5) using the method of 2008 (P = 0.011) and CPE of 6 (range, 6–8) versus 8 (range, 5–12) using the method of 2010 (P = 0.022). The extrapolated CSF trough levels for atazanavir (n = 12) were clearly above the 50% inhibitory concentration (IC50) in only 25% of samples; both patients on atazanavir/ritonavir with detectable CSF HIV-1 RNA had trough levels in the range of the presumed IC50. The extrapolated CSF trough level for lopinavir (n = 42) and efavirenz (n = 18) were above the IC50 in 98% and 78%, respectively, of samples, including the patients with detectable CSF HIV-1 RNA. Conclusions: This study suggests that treatment regimens with high intracerebral efficacy reflected by a high CPE score are essential to achieve CSF HIV-1 RNA suppression. The CPE score including all drug components was a better predictor for treatment failure in the CSF than the sole concentrations of protease inhibitor or nonnucleoside reverse transcriptase inhibitor in plasma or CSF.
Abstract:
OBJECTIVE: The aetiology of Crohn's disease (CD) has been related to nucleotide-binding oligomerisation domain containing 2 (NOD2) and ATG16L1 gene variants. The observation of bacterial DNA translocation in patients with CD led us to hypothesise that this process may be facilitated in patients with NOD2/ATG16L1-variant genotypes, affecting the efficacy of anti-tumour necrosis factor (TNF) therapies. DESIGN: 179 patients with Crohn's disease were included. CD-related NOD2 and ATG16L1 variants were genotyped. Phagocytic and bactericidal activities were evaluated in blood neutrophils. Bacterial DNA, TNFα, IFNγ, IL-12p40, free serum infliximab/adalimumab levels and antidrug antibodies were measured. RESULTS: Bacterial DNA was found in 44% of patients with active disease versus 23% of patients with remitting disease (p=0.01). A NOD2-variant or ATG16L1-variant genotype was associated with bacterial DNA presence (OR 4.8; 95% CI 1.1 to 13.2; p=0.001; and OR 2.4; 95% CI 1.4 to 4.7; p=0.01, respectively). This OR was 12.6 (95% CI 4.2 to 37.8; p=0.001) for patients with a double-variant genotype. Bacterial DNA was associated with disease activity (OR 2.6; 95% CI 1.3 to 5.4; p=0.005). Single- and double-gene variants were not associated with disease activity (p=0.19). Patients with a NOD2-variant genotype showed decreased phagocytic and bactericidal activities in blood neutrophils, increased TNFα levels in response to bacterial DNA and decreased trough levels of free anti-TNFα. The proportion of patients on an intensified biological therapy was significantly higher in the NOD2-variant groups. CONCLUSIONS: Our results characterise a subgroup of patients with CD who may require a more aggressive therapy to reduce the extent of inflammation and the risk of relapse.
Abstract:
BACKGROUND After heart transplantation (HTx), the interindividual pharmacokinetic variability of immunosuppressive drugs represents a major therapeutic challenge due to the narrow therapeutic window between over-immunosuppression causing toxicity and under-immunosuppression leading to graft rejection. Although genetic polymorphisms have been shown to influence pharmacokinetics of immunosuppressants, data in the context of HTx are scarce. We thus assessed the role of genetic variation in CYP3A4, CYP3A5, POR, NR1I2, and ABCB1 acting jointly in immunosuppressive drug pathways in tacrolimus (TAC) and ciclosporin (CSA) dose requirement in HTx recipients. METHODS Associations between 7 functional genetic variants and blood dose-adjusted trough (C0) concentrations of TAC and CSA at 1, 3, 6, and 12 months after HTx were evaluated in cohorts of 52 and 45 patients, respectively. RESULTS Compared with CYP3A5 nonexpressors (*3/*3 genotype), CYP3A5 expressors (*1/*3 or *1/*1 genotype) required around 2.2- to 2.6-fold higher daily TAC doses to reach the targeted C0 concentration at all studied time points (P ≤ 0.003). Additionally, the POR*28 variant carriers showed higher dose-adjusted TAC-C0 concentrations at all time points resulting in significant differences at 3 (P = 0.025) and 6 months (P = 0.047) after HTx. No significant associations were observed between the genetic variants and the CSA dose requirement. CONCLUSIONS The CYP3A5*3 variant has a major influence on the required TAC dose in HTx recipients, whereas the POR*28 may additionally contribute to the observed variability. These results support the importance of genetic markers in TAC dose optimization after HTx.
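The dose-adjusted trough concentration compared across genotype groups above is simply the measured C0 divided by the daily dose, and the dose requirement for a target C0 follows by inversion. A minimal sketch, with illustrative numbers that are assumptions rather than cohort data:

```python
def dose_adjusted_c0(c0_ng_ml, daily_dose_mg):
    """Blood trough concentration normalized by daily dose (ng/mL per mg)."""
    return c0_ng_ml / daily_dose_mg

def dose_for_target(target_c0, dose_adjusted):
    """Daily dose needed to reach a target trough, given the dose-adjusted C0."""
    return target_c0 / dose_adjusted

# Illustrative: a CYP3A5 expressor clears tacrolimus faster, so the same dose
# yields a lower trough, and ~2.4-fold more drug is needed to hit the target.
expressor = dose_adjusted_c0(c0_ng_ml=4.0, daily_dose_mg=8.0)      # 0.5 ng/mL per mg
nonexpressor = dose_adjusted_c0(c0_ng_ml=6.0, daily_dose_mg=5.0)   # 1.2 ng/mL per mg
target = 6.0  # ng/mL
print(dose_for_target(target, expressor) / dose_for_target(target, nonexpressor))
```

This is the arithmetic behind statements such as "2.2- to 2.6-fold higher daily TAC doses" in expressors.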
Abstract:
PURPOSE This study assessed whether a cycle of "routine" therapeutic drug monitoring (TDM) for imatinib dosage individualization, targeting an imatinib trough plasma concentration (Cmin) of 1,000 ng/ml (tolerance: 750–1,500 ng/ml), could improve clinical outcomes in chronic myelogenous leukemia (CML) patients, compared with TDM use only in case of problems ("rescue" TDM). METHODS The imatinib concentration monitoring evaluation was a multicenter randomized controlled trial including adult patients in chronic or accelerated phase CML who had been receiving imatinib for less than 5 years. Patients were allocated 1:1 to "routine TDM" or "rescue TDM." The primary endpoint was a combined outcome (failure- and toxicity-free survival with continuation on imatinib) over 1-year follow-up, analyzed in intention-to-treat (ISRCTN31181395). RESULTS Among 56 patients (55 evaluable), 14/27 (52%) receiving "routine TDM" remained event-free versus 16/28 (57%) "rescue TDM" controls (P = 0.69). In the "routine TDM" arm, dosage recommendations were correctly adopted in 14 patients (median Cmin: 895 ng/ml), who had fewer unfavorable events (28%) than the 13 not receiving the advised dosage (77%; P = 0.03; median Cmin: 648 ng/ml). CONCLUSIONS This first target concentration intervention trial could not formally demonstrate a benefit of "routine TDM" because of the small patient number and surprisingly limited prescriber adherence to dosage recommendations. Favorable outcomes were, however, found in patients actually elected for target dosing. This study thus provides a first prospective indication that TDM can be a useful tool to guide drug dosage and shift decisions. The study design and analysis provide an interesting paradigm for future randomized TDM trials on targeted anticancer agents.
Abstract:
Pituitary surgery is still mainly performed through a transnasal, transseptal and transsphenoidal approach. This surgical approach can damage intranasal structures and, in particular, may impede olfactory function. Our study investigated olfactory function in 67 patients undergoing this type of surgery before and 3 months after surgery. Mean olfactory scores were identical pre- and postoperatively. However, on an individual basis, seven percent of the patients showed a clear decrease in olfactory function. In conclusion, transnasal, transseptal and transsphenoidal surgery is relatively safe with regard to olfactory function.
Abstract:
Background Tumor necrosis factor (TNF) inhibition is central to the therapy of inflammatory bowel diseases (IBD). However, loss of response (LOR) is frequent, and additional tests to support decision making with costly anti-TNF therapy are needed. Methods Consecutive IBD patients receiving anti-TNF therapy (infliximab (IFX), or adalimumab after IFX LOR) at Bern University Hospital were identified and followed prospectively. Patient whole blood was stimulated with a dose-titration of two triggers, human TNF and LPS. The median fluorescence intensity of CD62L on the surface of granulocytes was quantified by surface staining with specific antibodies (CD33, CD62L) and flow cytometry; fitting logistic curves to these data permits calculation of the EC50, the half-maximal effective TNF concentration for inducing shedding [1]. A shift in the concentration at which CD62L shedding occurred was seen before and after administration of the anti-TNF agent, permitting prediction of the response to the drug. This predicted response was correlated with the clinical evolution of the patients in order to analyze the ability of this test to identify LOR to IFX. Results We collected prospective clinical data and blood samples, before and after anti-TNF agent administration, on 33 IBD patients, 25 with Crohn's disease (CD) and 8 with ulcerative colitis (UC) (45% female), between June 2012 and November 2013. The assay showed functional blockade by IFX (predicted functional response, PFR) for 22 patients (17 CD and 5 UC), whereas 11 (8 CD and 3 UC) had no functional response (NR) to IFX. Clinical characteristics (e.g. diagnosis, disease location, smoking status, BMI and number of infusions) were not significantly different between the predicted PFR and NR groups. Among the 22 patients with PFR, only 1 patient was a clinical non-responder (LOR to IFX), based on prospective clinical evaluation by IBD gastroenterologists (PJ, AM), and among the 11 predicted NR, 3 had no clinical LOR.
Sensitivity of this test was 95% and specificity 73%, and the AUC adjusted for age and gender was 0.81 (Figure 1). During follow-up (median 10 months, 3–15), 8 "hard" outcomes occurred (3 medic. flares, 4 resections and 1 new fistula), 2 in the PFR and 6 in the NR group (25% vs. 75%; p < 0.01). Correlation with clinical response is presented in Figure 2. Figure 1. Figure 2. Correlation of clinical response with log EC50 changes: 1 no, 2 partial, 3 complete clinical response. Conclusion CD62L (L-selectin) shedding is the first validated test of functional blockade of TNF alpha in anti-TNF-treated IBD patients and will be a useful tool to guide medical decisions on the use of anti-TNF agents. Comparative studies with ATI and trough levels of IFX are ongoing. 1. Patuto N, Slack E, Seibold F, Macpherson AJ (2011). Quantitating Anti-TNF Functionality to Inform Dosing and Choice of Therapy. Gastroenterology 140(5, Suppl. 1): S689.
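The EC50 readout described above can be approximated, without a full logistic fit, by locating on a log-concentration axis where the CD62L signal crosses halfway between its upper and lower plateaus. The sketch below stands in for the logistic-curve fitting of the cited assay; the titration data and the interpolation shortcut are illustrative assumptions, not the assay's actual procedure or values.

```python
import numpy as np

def ec50_half_crossing(conc, signal):
    """Estimate EC50 as the concentration where a monotonically decreasing
    signal (e.g. surface CD62L MFI) crosses halfway between its plateaus,
    interpolating linearly in log-concentration space."""
    half = (signal.max() + signal.min()) / 2.0
    logc = np.log10(conc)
    # np.interp requires an increasing x-axis, so reverse the decreasing signal.
    return float(10 ** np.interp(half, signal[::-1], logc[::-1]))

# Synthetic dose-titration: CD62L MFI falls as TNF concentration rises,
# because TNF triggers CD62L (L-selectin) shedding from granulocytes.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # ng/mL TNF (illustrative)
true_ec50 = 2.0
mfi = 100.0 / (1.0 + conc / true_ec50) + 20.0        # synthetic shedding curve

print(round(ec50_half_crossing(conc, mfi), 2))
```

A rightward shift of this crossing point after drug administration (a higher apparent EC50) indicates that circulating anti-TNF is neutralizing the added TNF.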
Abstract:
The consumption of immunoglobulins (Ig) is increasing due to better recognition of antibody deficiencies, an aging population, and new indications. This review aims to examine the various dosing regimens and research developments in the established and in some of the relevant off-label indications in Europe. The background to the current regulatory settings in Europe is provided as a backdrop for the latest developments in primary and secondary immunodeficiencies and in immunomodulatory indications. In these heterogeneous areas, clinical trials encompassing different routes of administration, varying intervals, and infusion rates are paving the way toward more individualized therapy regimens. In primary antibody deficiencies, adjustments in dosing and intervals will depend on the clinical presentation, effective IgG trough levels and IgG metabolism. Ideally, individual pharmacokinetic profiles in conjunction with the clinical phenotype could lead to highly tailored treatment. In practice, incremental dosage increases are necessary to titrate the optimal dose for more severely ill patients. Higher intravenous doses in these patients also have beneficial immunomodulatory effects beyond mere IgG replacement. Better understanding of the pharmacokinetics of Ig therapy is leading to a move away from simplistic "per kg" dosing. Defective antibody production is common in many secondary immunodeficiencies irrespective of whether the causative factor was lymphoid malignancies (established indications), certain autoimmune disorders, immunosuppressive agents, or biologics. This antibody failure, as shown by test immunization, may be amenable to treatment with replacement Ig therapy. In certain immunomodulatory settings [e.g., idiopathic thrombocytopenic purpura (ITP)], selection of patients for Ig therapy may be enhanced by relevant biomarkers in order to exclude non-responders and thus obtain higher response rates. 
In this review, the developments in dosing of therapeutic immunoglobulins have been limited to high- and some medium-priority indications such as ITP, Kawasaki disease, Guillain-Barré syndrome, chronic inflammatory demyelinating polyradiculoneuropathy, myasthenia gravis, multifocal motor neuropathy, fetal alloimmune thrombocytopenia, fetal hemolytic anemia, and dermatological diseases.
Abstract:
Alpine heavy precipitation events often affect small catchments, although the circulation pattern leading to the event extends over the entire North Atlantic. The various scale interactions involved are particularly challenging for the numerical weather prediction of such events. Unlike previous studies focusing on the southern Alps, here a comprehensive study of a heavy precipitation event in the northern Alps in October 2011 is presented, with particular focus on the role of the large-scale circulation in the North Atlantic/European region. During the event exceptionally high amounts of total precipitable water occurred in and north of the Alps. This moisture was initially transported along the flanks of a blocking ridge over the North Atlantic. Subsequently, a strong and persistent northerly flow established at the upstream flank of a trough over Europe and steered the moisture towards the northern Alps. Lagrangian diagnostics reveal that a large fraction of the moisture emerged from the West African coast, where a subtropical upper-level cut-off low served as an important moisture collector. Wave activity flux diagnostics show that the ridge was initiated as part of a low-frequency, large-scale Rossby wave train, while convergence of fast transients helped to amplify it locally in the North Atlantic. A novel diagnostic for advective potential vorticity tendencies sheds more light on this amplification and further emphasizes the role of the ridge in amplifying the trough over Europe. Operational forecasts misrepresented the amplitude and orientation of this trough. For the first time, this study documents an important pathway for northern Alpine flooding, in which the interaction of synoptic-scale and large-scale weather systems and long-range moisture transport from the Tropics are dominant. Moreover, the trapping of moisture in a subtropical cut-off near the West African coast is found to be a crucial precursor to the observed European high-impact weather.
Abstract:
Trace element behavior during hydrous melting of a metasomatized garnet–peridotite was examined at pressures of 4–6 GPa and temperatures of 1000 °C–1200 °C, conditions appropriate for fluid penetrating the mantle wedge atop the subducting slab. Experiments were performed in a rocking multi-anvil apparatus using a diamond-trap setup. The compositions of the fluid and melt phases were measured using the cryogenic LA-ICP-MS technique. The water-saturated solidus of the K-lherzolite composition is located between 900 °C and 1000 °C at 4 GPa and between 1000 °C and 1100 °C at 5 and 6 GPa. The partition coefficients between fluid or melt and clinopyroxene reveal an asymmetric MREE trough with a minimum at Dy. The clinopyroxene in equilibrium with aqueous fluids is characterized by D(U)fluid–cpx > D(Th)fluid–cpx, while D(U)melt–cpx tends to be similar to D(Th)melt–cpx. The partition coefficients between fluid or melt and garnet reveal very strong light to heavy REE fractionation, with D(La)/D(Lu) from 95 (hydrous melt) to 1600 (aqueous fluid). The LILE are highly incompatible, with partition coefficients > 50. The behavior of the HFSE is decoupled, with D(Zr,Hf) close to 1 while D(Nb,Ta) > 10. Garnet is characterized by D(U)melt/fluid–garnet < D(Th)melt/fluid–garnet. A comparison of our experimental partitioning results for trivalent cations, as well as the results from the literature and calculations carried out using the lattice strain model adapted to the presence of water in the bulk system, indicates that H2O in the fluid or melt phase has a prominent effect on trace element partitioning. Garnet in mantle rocks in equilibrium with an aqueous fluid is characterized by significantly higher D0(3+) for REE in the X site of the garnet compared with the partitioning values of the optimal cation in garnet in equilibrium with hydrous melts. Our data show for the first time that the change in the nature of the mobile phase (fluid vs. melt) does affect the affinities of trace elements for the garnet crystal at conditions below the second critical endpoint of the system. The same also applies to clinopyroxene, although this is less clear. Consequently, our new data allow for refinements in predictive modeling of element transfer from the slab to the mantle wedge and of possible compositions of the metasomatized mantle that sources OIB magmatism.
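The lattice strain model invoked above (in its common Brice/Blundy–Wood form) predicts the partition coefficient of a cation of radius ri entering a crystal site with optimal radius r0, from the strain-free coefficient D0 and the site's effective Young's modulus E. The sketch below uses illustrative parameter values, not those fitted in the study:

```python
import math

NA = 6.02214076e23   # Avogadro's number, 1/mol
R = 8.314462618      # gas constant, J/(mol K)

def lattice_strain_D(ri, D0, r0, E, T):
    """Blundy-Wood lattice strain model:
    D(i) = D0 * exp(-4*pi*E*NA*[(r0/2)*(ri-r0)^2 + (ri-r0)^3/3] / (R*T))
    with ri, r0 in metres, E in Pa, T in kelvin."""
    dr = ri - r0
    strain = (r0 / 2.0) * dr**2 + dr**3 / 3.0
    return D0 * math.exp(-4.0 * math.pi * E * NA * strain / (R * T))

# Illustrative parameters for REE(3+) in garnet's X site (assumed, not fitted):
D0, r0, E, T = 10.0, 0.93e-10, 3.0e11, 1473.0  # -, m, Pa, K (1200 degC)
for name, ri in [("La", 1.16e-10), ("Dy", 1.03e-10), ("Lu", 0.98e-10)]:
    print(name, lattice_strain_D(ri, D0, r0, E, T))
```

With these assumed values the model reproduces the qualitative behavior described above: small HREE (Lu) partition into garnet far more strongly than large LREE (La), and raising D0(3+), as inferred for aqueous fluids, scales the whole parabola upward.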
Influence of CYP3A5 genetic variation on everolimus maintenance dosing after cardiac transplantation
Abstract:
BACKGROUND Everolimus (ERL) has become an alternative to calcineurin inhibitors (CNIs) due to its renal-sparing properties, especially in heart transplant (HTx) recipients with kidney dysfunction. However, ERL dosing is challenging due to its narrow therapeutic window combined with high inter-individual pharmacokinetic variability. Our aim was to evaluate the effect of clinical and genetic factors on ERL dosing in a pilot cohort of 37 HTx recipients. METHODS Variants in CYP3A5, CYP3A4, CYP2C8, POR, NR1I2, and ABCB1 were genotyped and clinical data were retrieved from patient charts. RESULTS While the ERL trough concentration (C0) was within the targeted range for most patients, over 30-fold variability in the dose-adjusted ERL C0 was observed. Regression analysis revealed a significant effect of the non-functional CYP3A5*3 variant on the dose-adjusted ERL C0 (P = 0.031). The ERL dose requirement was 0.02 mg/kg/day higher in patients with the CYP3A5*1/*3 genotype compared to patients with CYP3A5*3/*3 to reach the targeted C0 (P = 0.041). ERL therapy substantially improved the estimated glomerular filtration rate (28.6 ± 6.6 mL/min/1.73 m²) in patients with baseline kidney dysfunction. CONCLUSION ERL pharmacokinetics in HTx recipients is highly variable. Our preliminary data on patients on a CNI-free therapy regimen suggest that CYP3A5 genetic variation may contribute to this variability.
Abstract:
Mutations in the vacuolar-type H+-ATPase B1 subunit gene ATP6V1B1 cause autosomal-recessive distal renal tubular acidosis (dRTA). We previously identified a single-nucleotide polymorphism (SNP) in the human B1 subunit (c.481G>A; p.E161K) that causes greatly diminished pump function in vitro. To investigate the effect of this SNP on urinary acidification, we conducted a genotype–phenotype analysis of recurrent stone formers in the Dallas and Bern kidney stone registries. Of 555 patients examined, 32 (5.8%) were heterozygous for the p.E161K SNP, and the remaining 523 (94.2%) carried two wild-type alleles. After adjustment for sex, age, body mass index, and dietary acid and alkali intake, p.E161K SNP carriers had a nonsignificant tendency to higher urinary pH on a random diet (6.31 versus 6.09; P=0.09). Under an instructed low-Ca and low-Na diet, urinary pH was higher in p.E161K SNP carriers (6.56 versus 6.01; P<0.01). Kidney stones of p.E161K carriers were more likely to contain calcium phosphate than stones of wild-type patients. In acute NH4Cl loading, p.E161K carriers displayed a higher trough urinary pH (5.34 versus 4.89; P=0.01) than wild-type patients. Overall, 14.6% of wild-type patients and 52.4% of p.E161K carriers were unable to acidify their urine below pH 5.3 and thus can be considered to have incomplete dRTA. In summary, our data indicate that recurrent stone formers with the vacuolar H+-ATPase B1 subunit p.E161K SNP exhibit a urinary acidification deficit with an increased prevalence of calcium phosphate–containing kidney stones. The burden of E161K heterozygosity may be a forme fruste of dRTA.