Abstract:
- Background Nilotinib and dasatinib are now being considered as alternatives to imatinib for the first-line treatment of chronic myeloid leukaemia (CML).
- Objective This technology assessment reviews the available evidence for the clinical effectiveness and cost-effectiveness of dasatinib, nilotinib and standard-dose imatinib for the first-line treatment of Philadelphia chromosome-positive CML.
- Data sources Databases [including MEDLINE (Ovid), EMBASE, Current Controlled Trials, ClinicalTrials.gov, the US Food and Drug Administration website and the European Medicines Agency website] were searched from the search end date of the last technology appraisal report on this topic (October 2002) to September 2011.
- Review methods A systematic review of clinical effectiveness and cost-effectiveness studies; a review of surrogate relationships with survival; a review and critique of manufacturer submissions; and a model-based economic analysis.
- Results Two clinical trials (dasatinib vs imatinib and nilotinib vs imatinib) were included in the effectiveness review. With the 24-month follow-up data available, survival did not differ significantly between dasatinib or nilotinib and imatinib. The rates of complete cytogenetic response (CCyR) and major molecular response (MMR) at 12 months' follow-up were higher for patients receiving dasatinib than for those receiving imatinib (CCyR 83% vs 72%, p < 0.001; MMR 46% vs 28%, p < 0.0001), and higher for patients receiving nilotinib than for those receiving imatinib (CCyR 80% vs 65%, p < 0.001; MMR 44% vs 22%, p < 0.0001). An indirect comparison analysis showed no difference between dasatinib and nilotinib in CCyR or MMR rates at 12 months' follow-up (CCyR, odds ratio 1.09, 95% CI 0.61 to 1.92; MMR, odds ratio 1.28, 95% CI 0.77 to 2.16). There is observational association evidence from imatinib studies supporting the use of CCyR and MMR at 12 months as surrogates for overall survival (OS) and progression-free survival in patients with CML in chronic phase. In the cost-effectiveness modelling, scenario analyses were provided to reflect the extensive structural uncertainty and the different approaches to estimating OS. First-line dasatinib is predicted to provide very poor value for money compared with first-line imatinib, with deterministic incremental cost-effectiveness ratios (ICERs) of between £256,000 and £450,000 per quality-adjusted life-year (QALY). Conversely, first-line nilotinib provided favourable ICERs at the willingness-to-pay threshold of £20,000-30,000 per QALY.
- Limitations The immaturity of the empirical trial data relative to life expectancy forces reliance either on surrogate relationships or on assumptions about cumulative survival and treatment duration.
- Conclusions From the two trials available, dasatinib and nilotinib have a statistically significant advantage over imatinib as measured by MMR or CCyR. Taking into account the treatment pathway for patients with CML, i.e. assuming the use of second-line nilotinib, first-line nilotinib appears to be more cost-effective than first-line imatinib. Compared with imatinib and nilotinib, dasatinib was not cost-effective at decision thresholds of £20,000 or £30,000 per QALY. Uncertainty in the cost-effectiveness analysis would be substantially reduced by better, more UK-specific data on the incidence and cost of stem cell transplantation in patients with chronic-phase CML.
- Funding The Health Technology Assessment Programme of the National Institute for Health Research.
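Two of the quantities reported in the abstract above lend themselves to a short worked illustration: the deterministic ICER (incremental cost divided by incremental QALYs) and a Bucher-style indirect comparison of two treatments through their common imatinib comparator. The Python sketch below is ours, not the report's model: the function names, costs, QALYs, odds ratios and confidence intervals are all invented for illustration; only the quoted £256,000-450,000 range comes from the abstract.

```python
import math

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Deterministic ICER: incremental cost per incremental QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented lifetime costs and QALYs, chosen only so the result falls inside
# the report's quoted £256,000-450,000 per QALY range for dasatinib:
print(f"ICER = £{icer(310_000, 220_000, 9.55, 9.25):,.0f} per QALY")

def bucher_indirect_or(or_ac, ci_ac, or_bc, ci_bc, z=1.96):
    """Bucher indirect comparison of A vs B through a common comparator C.

    or_ac, or_bc: odds ratios of A vs C and B vs C from separate trials;
    ci_ac, ci_bc: their (lower, upper) 95% CIs, used to recover log-scale SEs.
    """
    log_or = math.log(or_ac) - math.log(or_bc)
    se = math.hypot((math.log(ci_ac[1]) - math.log(ci_ac[0])) / (2 * z),
                    (math.log(ci_bc[1]) - math.log(ci_bc[0])) / (2 * z))
    return math.exp(log_or), (math.exp(log_or - z * se), math.exp(log_or + z * se))

# Invented trial-level odds ratios and CIs (not the report's actual inputs):
or_ab, ci_ab = bucher_indirect_or(1.90, (1.35, 2.67), 2.15, (1.52, 3.04))
print(f"indirect OR (A vs B) = {or_ab:.2f}, 95% CI {ci_ab[0]:.2f}-{ci_ab[1]:.2f}")
```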
Abstract:
Purification of drinking water is routinely achieved by use of conventional coagulants and disinfection procedures. However, there are instances, such as flood events, when turbidity reaches extreme levels, while natural organic matter (NOM) may be an issue throughout the year. Consequently, there is a need to develop technologies which can effectively treat water of high turbidity during flood events and of high NOM content year-round. Our hypothesis was that pebble matrix filtration potentially offered a relatively cheap, simple and reliable means to clarify such challenging water samples. Therefore, a laboratory-scale pebble matrix filter (PMF) column was used to evaluate turbidity and NOM pre-treatment performance in relation to 2013 Brisbane River flood water. Since the high turbidity was only a seasonal and short-term problem, the general applicability of pebble matrix filters for NOM removal was also investigated. A 1.0 m deep bed of pebbles (the matrix) partly in-filled with either sand or crushed glass was tested, upon which was situated a layer of granular activated carbon (GAC). Turbidity was measured as a surrogate for suspended solids (SS), whereas total organic carbon (TOC) and UV absorbance at 254 nm were measured as surrogate parameters for NOM. Experiments using natural flood water showed that, without the addition of any chemical coagulants, PMF columns achieved at least 50% turbidity reduction when the source water contained moderate hardness levels. For harder water samples, above 85% turbidity reduction was obtained. The ability to remove 50% of turbidity without chemical coagulants may represent significant cost savings to water treatment plants, with added environmental benefits from reduced sludge formation. Reductions of 35-47% in TOC and 24-38% in UV254 were also observed. In addition to turbidity removal during flood periods, the ability to remove NOM with the pebble matrix filter throughout the year may have the benefit of reducing disinfection by-product (DBP) formation potential and coagulant demand at water treatment plants. Final head losses were remarkably low, reaching only 11 cm at a filtration velocity of 0.70 m/h.
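All of the percentage reductions quoted above follow from the same removal-efficiency arithmetic. A trivial Python sketch; the influent and effluent values below are made up for illustration and are not measurements from the study:

```python
def percent_removal(c_in: float, c_out: float) -> float:
    """Removal efficiency as commonly defined: 100 * (C_in - C_out) / C_in."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical influent/effluent pairs, for illustration only:
print(f"turbidity: {percent_removal(120.0, 55.0):.0f}% removed")   # NTU
print(f"TOC:       {percent_removal(10.0, 6.0):.0f}% removed")     # mg/L
print(f"UV254:     {percent_removal(0.30, 0.21):.0f}% removed")    # abs/cm
```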
Abstract:
Dispersal is a highly important life history trait. In fragmented landscapes the long-term persistence of populations depends on dispersal. Evolution of dispersal is affected by costs and benefits, and these may differ between different landscapes. This results in differences in the strength and direction of natural selection on dispersal in fragmented landscapes. Dispersal has been shown to be a nonrandom process that is associated with traits such as flight ability in insects. This thesis examines genetic and physiological traits affecting dispersal in the Glanville fritillary butterfly (Melitaea cinxia). Flight metabolic rate is a repeatable trait representing flight ability. Unlike in many vertebrates, resting metabolic rate cannot be used as a surrogate of maximum metabolic rate, as no strong correlation between the two was found in the Glanville fritillary. Resting and flight metabolic rates are affected by environmental variables, most notably temperature. However, only flight metabolic rate has a strong genetic component. Molecular variation in the much-studied candidate locus phosphoglucose isomerase (Pgi), which encodes the glycolytic enzyme PGI, has an effect on carbohydrate metabolism in flight. This effect is temperature dependent: at low to moderate temperatures, individuals heterozygous at the single nucleotide polymorphism (SNP) AA111 have a higher flight metabolic rate than individuals with the common homozygous genotype. At high temperatures the situation is reversed. This finding suggests that variation in enzyme properties is indeed translated into organismal performance. High-resolution data on individual female Glanville fritillaries moving freely in the field were recorded using harmonic radar. There was a strong positive correlation between flight metabolic rate and dispersal rate. Flight metabolic rate explained one third of the observed variation in the one-hour movement distance. A fine-scaled analysis of mobility showed that mobility peaked at intermediate ambient temperatures, but the two common Pgi genotypes differed in their reaction norms to temperature. As with flight metabolic rate, heterozygotes at SNP AA111 were the most active genotype at low to moderate temperatures. The results show that molecular variation is associated with variation in dispersal rate through the link of flight physiology under the influence of environmental conditions. The evolutionary pressures for dispersal differ between males and females. The effect of flight metabolic rate on dispersal was examined in both sexes in field and laboratory conditions. The relationship between flight metabolic rate and dispersal rate in the field, and flight duration in the laboratory, differed between the two sexes. In females the relationship was positive, but in males the longest distances and flight durations were recorded for individuals with low flight metabolic rate. These findings may reflect male investment in mate locating. Instead of dispersing, males with high flight metabolic rate may establish territories and follow a perching strategy when locating females, and hence move less at the landscape level. Males with low metabolic rate may be forced to disperse due to low competitive success, or may show adaptations to an alternative strategy: patrolling. In the light of life-history trade-offs and the rate-of-living theory, a high metabolic rate may carry a cost in the form of a shortened lifespan.
Experiments relating flight metabolic rate to longevity showed a clear correlation in the opposite direction: high flight metabolic rate was associated with a long lifespan. This suggests that individuals with a high metabolic rate do not pay an extra physiological cost for their high flight capacity; rather, there are positive correlations between different measures of fitness. These results highlight the importance of condition.
Abstract:
One major reason for the global decline of biodiversity is habitat loss and fragmentation. Conservation areas can be designed to reduce biodiversity loss, but as resources are limited, conservation efforts need to be prioritized in order to achieve the best possible outcomes. The field of systematic conservation planning developed as a response to opportunistic approaches to conservation that often resulted in biased representation of biological diversity. The last two decades have seen the development of increasingly sophisticated methods that account for information about biodiversity conservation goals (benefits), economic considerations (costs) and socio-political constraints. In this thesis I focus on two general topics related to systematic conservation planning. First, I address two aspects of the question of how biodiversity features should be valued. (i) I investigate the extremely important but often neglected issue of differential prioritization of species for conservation. Species prioritization can be based on various criteria, and is always goal-dependent, but it can also be implemented in a scientifically more rigorous way than is the usual practice. (ii) I introduce a novel framework for conservation prioritization based on continuous benefit functions, which convert increasing levels of biodiversity feature representation into increasing conservation value using the principle that more is better. Traditional target-based systematic conservation planning is a special case of this approach in which a step function is used as the benefit function. We have further expanded the benefit function framework for area prioritization to address issues such as protected area size and habitat vulnerability. In the second part of the thesis I address the application of community-level modelling strategies to conservation prioritization. One of the most serious issues in systematic conservation planning currently is not the deficiency of methodology for selection and design, but simply the lack of data. Community-level modelling offers a surrogate strategy that makes conservation planning more feasible in data-poor regions. We have reviewed the available community-level approaches to conservation planning. These range from simplistic classification techniques to sophisticated modelling and selection strategies. We have also developed a general and novel community-level approach to conservation prioritization that significantly improves on previously available methods. This thesis introduces further degrees of realism into conservation planning methodology. The benefit-function-based conservation prioritization framework largely circumvents the problematic phase of target setting and, by allowing trade-offs in species representation, provides a more flexible and hopefully more attractive approach for conservation practitioners. The community-level approach seems highly promising and should prove valuable for conservation planning, especially in data-poor regions. Future work should focus on integrating prioritization methods to deal with the multiple aspects that in combination influence the prioritization process, and on further testing and refining the community-level strategies using real, large datasets.
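The contrast the abstract draws between target-based planning and continuous benefit functions is easy to make concrete. In the Python sketch below, a step function reproduces traditional target-based scoring as a special case, while a concave power function implements the "more is better" principle; the functional forms and parameters are our own illustrative choices, not those used in the thesis.

```python
def step_benefit(representation: float, target: float) -> float:
    """Target-based planning: full conservation value once the target is met."""
    return 1.0 if representation >= target else 0.0

def continuous_benefit(representation: float, shape: float = 0.5) -> float:
    """A concave 'more is better' benefit function (illustrative power form):
    value keeps increasing with representation, with diminishing returns."""
    return representation ** shape

for r in (0.0, 0.2, 0.5, 0.8, 1.0):
    print(f"rep={r:.1f}  step={step_benefit(r, target=0.5):.0f}  "
          f"continuous={continuous_benefit(r):.2f}")
```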
Abstract:
Background: The fecal neutrophil-derived proteins calprotectin and lactoferrin have proven useful surrogate markers of intestinal inflammation. The aim of this study was to compare fecal calprotectin and lactoferrin concentrations with clinically, endoscopically, and histologically assessed Crohn’s disease (CD) activity, and to explore the suitability of these proteins as surrogate markers of mucosal healing during anti-TNFα therapy. Furthermore, we studied changes in the number and expression of effector and regulatory T cells in bowel biopsy specimens during anti-TNFα therapy. Patients and methods: Adult CD patients referred for ileocolonoscopy for various reasons were recruited (106 ileocolonoscopies in 77 patients; Study I). Clinical disease activity was assessed with the Crohn’s disease activity index (CDAI) and endoscopic activity with both the Crohn’s disease endoscopic index of severity (CDEIS) and the simple endoscopic score for Crohn’s disease (SES-CD). Stool samples for measurements of calprotectin and lactoferrin, and blood samples for CRP, were collected. For Study II, biopsy specimens were obtained from the ileum and the colon for histologic activity scoring. In prospective Study III, after baseline ileocolonoscopy, 15 patients received induction with anti-TNFα blocking agents, and endoscopic, histologic, and fecal-marker responses to therapy were evaluated at 12 weeks. For detecting changes in the number and expression of effector and regulatory T cells, biopsy specimens were taken from the most severely diseased lesions in the ileum and the colon (Study IV). Results: Endoscopic scores correlated significantly with fecal calprotectin and lactoferrin (p<0.001). Both fecal markers were significantly lower in patients with endoscopically inactive than with active disease (p<0.001). In detecting endoscopically active disease, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for calprotectin ≥200 μg/g were 70%, 92%, 94%, and 61%; for lactoferrin ≥10 μg/g they were 66%, 92%, 94%, and 59%. Accordingly, the sensitivity, specificity, PPV, and NPV for CRP >5 mg/l were 48%, 91%, 91%, and 48%. Fecal markers were significantly higher in active colonic (both p<0.001) or ileocolonic (calprotectin p=0.028, lactoferrin p=0.004) than in ileal disease. In ileocolonic or colonic disease, the colon histology score correlated significantly with fecal calprotectin (r=0.563) and lactoferrin (r=0.543). In patients receiving anti-TNFα therapy, median fecal calprotectin decreased from 1173 μg/g (range 88-15326) to 130 μg/g (13-1419) and lactoferrin from 105.0 μg/g (4.2-1258.9) to 2.7 μg/g (0.0-228.5), both p=0.001. The ratio of ileal IL-17+ cells to CD4+ cells decreased significantly during anti-TNF treatment (p=0.047). The ratio of IL-17+ cells to Foxp3+ cells was higher in the patients’ baseline specimens than in their post-treatment specimens (p=0.038). Conclusions: For evaluation of CD activity based on endoscopic findings, fecal calprotectin and lactoferrin were more sensitive surrogate markers than CDAI and CRP. Fecal calprotectin and lactoferrin were significantly higher in endoscopically active disease than in endoscopic remission. In both ileocolonic and colonic disease, fecal markers correlated closely with histologic disease activity. In CD, these neutrophil-derived proteins thus seem to be useful surrogate markers of endoscopic activity. During anti-TNFα therapy, fecal calprotectin and lactoferrin decreased significantly.
The anti-TNFα treatment was also reflected in a decreased IL-17/Foxp3 cell ratio, which may indicate improved balance between effector and regulatory T cells with treatment.
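The four diagnostic indices reported above all come from a single 2x2 table of marker status against endoscopic activity. A minimal Python sketch; the cell counts below are invented for illustration, although they are chosen so that they reproduce the 70%/92%/94%/61% figures quoted for calprotectin ≥200 μg/g:

```python
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # diseased correctly flagged
        "specificity": tn / (tn + fp),   # non-diseased correctly cleared
        "PPV": tp / (tp + fp),           # positive tests that are truly diseased
        "NPV": tn / (tn + fn),           # negative tests that are truly disease-free
    }

# Illustrative counts only (not taken from the study's data tables):
for name, value in diagnostic_indices(tp=49, fp=3, fn=21, tn=33).items():
    print(f"{name}: {value:.0%}")
```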
Abstract:
Quantification of pyridoxal-5′-phosphate (PLP) in biological samples is challenging due to the presence of endogenous PLP in the matrices used for preparation of calibrators and quality control samples (QCs). Hence, we have developed an LC-MS/MS method for accurate and precise measurement of PLP concentrations in samples (20 µL) of human whole blood that addresses this issue by using a surrogate matrix and minimizing the matrix effect. We used a surrogate matrix comprising 2% bovine serum albumin (BSA) in phosphate-buffered saline (PBS) for making calibrators and QCs, and the concentrations were adjusted to account for the endogenous PLP concentration in the surrogate matrix according to the method of standard addition. PLP was separated from the other components of the sample matrix by protein precipitation with trichloroacetic acid 10% w/v. After centrifugation, the supernatant was injected directly into the LC-MS/MS system. Calibration curves were linear and recovery was >92%. QCs were accurate, precise, and stable for four freeze-thaw cycles and following storage at room temperature for 17 h or at -80 °C for 3 months. There was no significant matrix effect across 9 different individual human blood samples. Our novel LC-MS/MS method satisfied all of the criteria specified in the 2012 European Medicines Agency (EMA) guideline on bioanalytical method validation.
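The method of standard addition mentioned above amounts to spiking known amounts of analyte into the endogenous-containing matrix, regressing instrument response on the added concentration, and reading the endogenous concentration off the magnitude of the x-intercept. A minimal NumPy sketch; all numbers below are invented for illustration and are not the study's calibration data:

```python
import numpy as np

# Spike known amounts of PLP into the surrogate matrix and record the response:
added = np.array([0.0, 10.0, 20.0, 40.0, 80.0])        # ng/mL spiked in
response = np.array([0.52, 1.01, 1.55, 2.49, 4.55])    # response (arbitrary units)

slope, intercept = np.polyfit(added, response, 1)       # linear regression
endogenous = intercept / slope                          # magnitude of x-intercept
print(f"estimated endogenous PLP ≈ {endogenous:.1f} ng/mL")

# Calibrator nominal concentrations are then the spiked amount plus this
# endogenous background, as the abstract describes:
nominal = added + endogenous
```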
Abstract:
Thirty percent of 70-year-old women have osteoporosis; after the age of 80, its prevalence is up to 70%. Postmenopausal women with osteoporosis seem to be at increased risk for cardiovascular events and for deterioration of oral health, as shown by attachment loss of teeth, which is proportional to the severity of osteoporosis. Osteoporosis can be treated with many different medications, e.g. estrogen and alendronate. We randomized 90 elderly osteoporotic women (65-80 years of age) to receive hormone therapy (HT; 2 mg E2 + NETA), 10 mg alendronate, or their combination for two years and compared their effects on bone mineral density (BMD) and bone turnover; on two surrogate markers of cardiovascular risk, C-reactive protein (CRP) and E-selectin; and on oral health. The effect of HT on health-related quality of life (HRQoL) was studied in a population-based cohort of 1663 postmenopausal women (mean age 68 years; 585 estrogen users and 1078 non-users). BMD was measured with dual-energy X-ray absorptiometry (DXA) at 0, 12 and 24 months. Urinary N-telopeptide of type I collagen (NTX), a marker of bone resorption, and serum aminoterminal propeptide of human type I procollagen (PINP), a marker of bone formation, were measured every six months of treatment. Serum CRP and E-selectin were measured at 0, 6, and 12 months. Dental and periodontal conditions and gingival crevicular fluid (GCF) matrix metalloproteinase (MMP)-8 levels were studied to evaluate oral health status, and a structured questionnaire was used for mouth symptoms. HRQoL was measured with the 15D questionnaire. Lumbar spine BMD increased similarly in all treatment groups (6.8-8.4% at 12 months and 9.1-11.2% at 24 months). Only HT increased femoral neck BMD at both 12 (4.9%) and 24 months (5.8%); at the latter time point the HT group differed significantly from the other groups. HT reduced the bone marker levels of NTX and PINP significantly less than the other two groups. Oral HT significantly increased serum CRP level by 76.5% at 6 months and by 47.1% (NS) at 12 months, and decreased serum E-selectin level by 24.3% and 30.0%, respectively. Alendronate had no effect on these surrogate markers. Alendronate caused a decrease in the resting salivary flow rate and tended to increase GCF MMP-8 levels. Otherwise, there was no effect on the parameters of oral health. HT improved the HRQoL of elderly women significantly on the dimensions of usual activities, vitality and sexual activity, but the overall improvement in HRQoL was neither statistically significant nor clinically important. In conclusion, bisphosphonates might be the first option for starting treatment of postmenopausal osteoporosis in old age.
Abstract:
Assessment of the outcome of critical illness is complex. Severity scoring systems and organ dysfunction scores are traditional tools for mortality and morbidity prediction in intensive care. Their ability to explain the risk of death is impressive for large cohorts of patients, but insufficient for an individual patient. Although events before intensive care unit (ICU) admission are prognostically important, the prediction models utilize data collected at and just after ICU admission. In addition, several biomarkers have been evaluated to predict mortality, but none has proven entirely useful in clinical practice. Therefore, new prognostic markers of critical illness are vital when evaluating intensive care outcome. The aim of this dissertation was to investigate new measures and biological markers of critical illness and to evaluate their predictive value and association with mortality and disease severity. The impact of delay in the emergency department (ED) on intensive care outcome, measured as hospital mortality and health-related quality of life (HRQoL) at 6 months, was assessed in 1537 consecutive patients admitted to a medical ICU. Two new biological markers were investigated in two separate patient populations: 231 ICU patients and 255 patients with severe sepsis or septic shock. Cell-free plasma DNA is a surrogate marker of apoptosis. Its association with disease severity and mortality rate was evaluated in ICU patients. Next, the predictive value of plasma DNA regarding mortality and its association with the degree of organ dysfunction and disease severity were evaluated in severe sepsis or septic shock. Heme oxygenase-1 (HO-1) is a potential regulator of apoptosis. Finally, HO-1 plasma concentrations and HO-1 gene polymorphisms and their association with outcome were evaluated in ICU patients. The length of ED stay was not associated with the outcome of intensive care. The hospital mortality rate was significantly lower in patients admitted to the medical ICU from the ED than from non-ED sources, and the HRQoL of the critically ill at 6 months was significantly lower than that of the age- and sex-matched general population. In the ICU patient population, the maximum plasma DNA concentration measured during the first 96 hours in intensive care correlated significantly with disease severity and degree of organ failure and was independently associated with hospital mortality. In patients with severe sepsis or septic shock, cell-free plasma DNA concentrations were significantly higher in ICU and hospital nonsurvivors than in survivors and showed moderate discriminative power regarding ICU mortality. Plasma DNA was an independent predictor of ICU mortality, but not of hospital mortality. The degree of organ dysfunction correlated independently with plasma DNA concentration in severe sepsis and with plasma HO-1 concentration in ICU patients. The HO-1 -413T/GT(L)/+99C haplotype was associated with HO-1 plasma levels and with the frequency of multiple organ dysfunction. Plasma DNA and HO-1 concentrations may support the assessment of outcome or organ failure development in critically ill patients, although their value is limited and requires further evaluation.
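The "discriminative power" of a marker such as plasma DNA is typically summarized as the area under the ROC curve, which equals the probability that a randomly chosen nonsurvivor has a higher marker value than a randomly chosen survivor (the Mann-Whitney interpretation). A minimal Python sketch with invented concentrations, not values from the study:

```python
def roc_auc(nonsurvivors, survivors):
    """ROC AUC via the Mann-Whitney statistic:
    P(marker_nonsurvivor > marker_survivor), counting ties as half."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in nonsurvivors for y in survivors)
    return wins / (len(nonsurvivors) * len(survivors))

# Invented cell-free plasma DNA values, higher among nonsurvivors as in the study:
print(roc_auc([1.9, 2.4, 3.1, 2.8], [1.2, 1.8, 2.0, 1.1, 2.5]))  # -> 0.85 here
```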
Abstract:
Primary biliary cirrhosis (PBC) is caused by autoimmune inflammation of the small bile ducts. It results in destruction of the bile ducts, accumulation of bile in the liver, and cirrhosis. The prevalence and incidence of PBC are increasing in the Western world. The prevalence is highest in the USA (402 per million) and the incidence in Scotland (49/million/year). Our aim was to assess the epidemiology of PBC in Finland. Patients for the epidemiological study were identified from hospital discharge records from 1988 to 1999. The prevalence rose from 103 to 180/million from 1988 to 1999, an annual increase of 5.1%. The incidence rose from 12 to 17/million/year, an annual increase of 3.5%. The age at death increased markedly, from 65 to 76 years. The risk of liver-related death diminished over time. The treatment of PBC is based on ursodeoxycholic acid (UDCA). Within 20 years, 50% of patients end up with cirrhosis. Our treatment option was to combine budesonide, a potent corticosteroid with high first-pass metabolism in the liver, with UDCA, and to evaluate the liver effects and systemic effects such as changes in bone mineral density (BMD). A further aim was to find out whether a combination of laboratory tests would serve as a surrogate marker for PBC and help reduce the need for liver biopsy. Non-cirrhotic PBC patients were randomized to receive budesonide 6 mg/day combined with UDCA 15 mg/kg/day, or UDCA alone, for three years. The combination therapy with UDCA and budesonide was effective: stage improved by 22%, fibrosis by 25%, and inflammation by 32%. In the UDCA group the changes were a 20% deterioration in stage and 70% in fibrosis, but a 10% improvement in inflammation. BMD in the femoral neck decreased by 3.6% in the combination group and by 1.9% in the UDCA group. The reductions in the lumbar spine were 2.8% and 0.7%, respectively. Pharmacokinetics did not differ between the stages of PBC. Hyaluronic acid (HA), PIIINP, bile acids, and AST were significantly different within stages I-III and could differentiate mild fibrosis (F0-F1) from moderate fibrosis (F2-F3). A combination of these individual markers (the PBC score) further improved the accuracy. Using a cut-off value of 66, the PBC score had a sensitivity of 81.4% and a specificity of 65.2% for classifying the stage of PBC. The prevalence of PBC in Finland is increasing, which results from increasing incidence and improved survival. The combination of budesonide and UDCA improves liver histology compared with UDCA alone in the non-cirrhotic stages of PBC. The treatment may reduce BMD. Hyaluronic acid, PIIINP, AST, and bile acids may serve as tools to monitor the treatment response in the early stages of PBC. Budesonide and UDCA combination therapy is an option for patients who do not obtain a full response from UDCA and are still at a non-cirrhotic stage of PBC.
Abstract:
The aim of the present thesis was to study the role of the epithelial sodium channel (ENaC) in the clearance of fetal lung fluid in the newborn infant by measuring airway epithelial expression of ENaC, nasal transepithelial potential difference (N-PD), and lung compliance (LC). In addition, the effect of postnatal dexamethasone on airway epithelial ENaC expression was measured in preterm infants with bronchopulmonary dysplasia (BPD). The patient population consisted of selected term newborn infants born in the Department of Obstetrics (Studies II-IV) and selected preterm newborn infants treated in the neonatal intensive care unit of the Hospital for Children and Adolescents (Studies I and IV) of the Helsinki University Central Hospital in Finland. A small population of preterm infants suffering from BPD was included in Study I. Studies I, III, and IV included airway epithelial measurement of ENaC, and Studies II and III measurement of N-PD and LC. In Study I, ENaC expression analyses were performed at the Research Institute of the Hospital for Sick Children in Toronto, Ontario, Canada. In the subsequent studies, analyses were performed in the Scientific Laboratory of the Hospital for Children and Adolescents. N-PD and LC measurements were performed at the bedside in these hospitals. In term newborn infants, the percentage of amiloride-sensitive N-PD, a surrogate for ENaC activity, measured during the first 4 postnatal hours correlated positively with LC measured 1 to 2 days postnatally. Preterm infants with BPD had, after a therapeutic dose of dexamethasone, higher airway epithelial ENaC expression than before treatment. These patients were subsequently weaned from mechanical ventilation, probably as a result of the clearance of excess fluid from the alveolar spaces. In addition, we found that in preterm infants ENaC expression increases with gestational age (GA). In preterm infants, ENaC expression in the airway epithelium was lower than in term newborn infants. During the early postnatal period, airway epithelial βENaC expression decreased significantly in infants born both preterm and term. Term newborn infants delivered vaginally had significantly lower airway epithelial expression of αENaC after the first postnatal day than did those delivered by cesarean section. The functional studies showed no difference in N-PD between infants delivered vaginally and those delivered by cesarean section. We therefore conclude that the low airway epithelial expression of ENaC in the preterm infant and the correlation of N-PD with LC in the term infant indicate a role for ENaC in the pathogenesis of perinatal pulmonary adaptation and neonatal respiratory distress. Because dexamethasone raised ENaC expression in preterm infants with BPD, and these infants were subsequently weaned from ventilator therapy, we suggest that studies on the treatment of respiratory distress in the preterm infant should include the induction of ENaC activity.
Abstract:
BACKGROUND: Obesity is closely associated with insulin resistance, a pathophysiologic condition contributing to the important co-morbidities of obesity, such as the metabolic syndrome and type 2 diabetes mellitus. In obese subjects, adipose tissue is characterized by inflammation (macrophage infiltration, increased expression of insulin-resistance genes and decreased expression of insulin-sensitivity genes). Increased liver fat in the absence of excessive alcohol consumption is defined as non-alcoholic fatty liver disease (NAFLD) and is also associated with obesity and insulin resistance. It is unknown whether and how insulin resistance is associated with altered expression of adipocytokines (adipose tissue-derived signaling molecules), and whether adipose tissue inflammation and NAFLD coexist independent of obesity. Genetic factors could explain variation in liver fat independent of obesity, but the heritability of NAFLD is unknown. AIMS: To determine whether acute regulation of adipocytokine expression by insulin in adipose tissue is altered in obesity. To investigate the relationship between adipose tissue inflammation and liver fat content independent of obesity. To assess the heritability of serum alanine aminotransferase (ALT) activity, a surrogate marker of liver fat. METHODS: 55 healthy normal-weight and obese volunteers were recruited. Subcutaneous adipose tissue biopsies were obtained for measurement of gene expression before and during 6 hours of euglycemic hyperinsulinemia. Liver fat content was measured by proton magnetic resonance spectroscopy, and adipose tissue inflammation was assessed by gene expression, immunohistochemistry and lipidomics analysis. Genetic factors contributing to serum ALT activity were determined in 313 twins by statistical heritability modeling. RESULTS: During insulin infusion, the expression of insulin-sensitivity genes remained unchanged, while the expression of insulin-resistance genes increased in obese/insulin-resistant subjects compared with insulin-sensitive subjects. Adipose tissue inflammation was associated with liver fat content independent of obesity. Adipose tissue of subjects with high liver fat content was characterized by infiltrated macrophages and increased expression of inflammatory genes, as well as by increased concentrations of ceramides, compared with equally obese subjects with normal liver fat. A significant heritability for serum ALT activity was verified. CONCLUSIONS: The effects of insulin infusion on adipose tissue gene expression in obese/insulin-resistant subjects are characterized not only by a hyporesponse of insulin-sensitivity genes but also by a hyperresponse of insulin-resistance and inflammatory genes. This suggests that in obesity, impaired insulin action contributes to or self-perpetuates alterations in adipocytokine expression in adipose tissue. Adipose tissue inflammation is increased in subjects with high liver fat compared with equally obese subjects with normal liver fat content. Concentrations of ceramides, putative mediators of insulin resistance, are increased in adipose tissue in subjects with high liver fat. Genetic factors contribute significantly to variation in serum ALT activity, a surrogate marker of liver fat. These data imply that adipose tissue inflammation and increased liver fat content are closely interrelated and determine insulin resistance even independently of obesity.
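For readers unfamiliar with twin-based heritability: the study used formal statistical heritability modeling, but the classic Falconer approximation conveys the idea, decomposing trait variance from the monozygotic and dizygotic twin correlations. A back-of-the-envelope Python sketch; the correlations below are invented, and this simple formula is only an analogue of the structural (e.g. ACE-type) models actually used in such work:

```python
# Falconer's approximation: h^2 = 2 * (r_MZ - r_DZ).
r_mz = 0.55  # hypothetical intraclass correlation of ALT in monozygotic pairs
r_dz = 0.30  # hypothetical intraclass correlation in dizygotic pairs

h2 = 2 * (r_mz - r_dz)   # additive genetic component (heritability)
c2 = r_mz - h2           # shared-environment component
e2 = 1 - r_mz            # unique environment + measurement error
print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")
```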
Abstract:
Head and neck squamous cell carcinoma (HNSCC) is the sixth most common cancer worldwide. Well-known risk factors include tobacco smoking and alcohol consumption. Overall survival has improved, but it is still low, especially in developing countries. One reason for this is the often advanced stage of the disease at the time of diagnosis, but another is the lack of reliable prognostic tools to enable individualized patient treatment to improve outcome. To date, the TNM classification still serves as the best disease evaluation criterion, although it does not take into account the molecular basis of the tumor. The need for surrogate molecular markers for more accurate disease prediction has increased research interest in this field. We investigated the prevalence, physical status, and viral load of human papillomavirus (HPV) in HNSCC to determine the impact of HPV on head and neck carcinogenesis. The prevalence and genotype of HPV were assessed with an SPF10 PCR microtiter plate-based hybridization assay (DEIA), followed by a line probe-based genotyping assay. More than half of the patients had HPV DNA in their tumor specimens. Oncogenic HPV-16 was the most common type, and coinfections with other oncogenic and benign-associated types also existed. HPV-16 viral load was unevenly distributed among different tumor sites; the tonsils harbored significantly greater amounts of virus than other sites. Episomal location of HPV-16 was associated with large tumors, and both integrated and mixed forms of viral DNA were detected. In this series, we could not show that the presence of HPV DNA correlated with survival. In addition, we investigated the prevalence and genotype of HPV in laryngeal carcinoma patients in a prospective Nordic multicenter study based on fresh-frozen laryngeal tumor samples to determine whether the tumors were HPV-associated. These patients were also examined and interviewed at diagnosis about known risk factors, such as tobacco smoking and alcohol consumption, and about several other habits, to elucidate their effects on patient survival. HPV analysis was performed with the same protocols as in the first study. Only 4% of the specimens harbored HPV DNA. Heavy drinking was associated with poor survival. Heavy-drinking patients were also younger than non-heavy drinkers and had a more advanced stage of disease at diagnosis. Heavy drinkers had worse oral hygiene than non-heavy drinkers; however, poor oral hygiene did not have prognostic significance. Histories of chronic laryngitis, gastroesophageal reflux disease, and orogenital sex contacts were rare in this series. To clarify why vocal cord carcinomas seldom metastasize, we determined tumor lymphatic vessel density (LVD) and blood vessel density (BVD) in HNSCC patients. We used a novel lymphatic vessel endothelial marker (LYVE-1 antibody) to locate the lymphatic vessels in HNSCC samples and CD31 to detect the blood microvessels. We found carcinomas of the vocal cords to harbor fewer lymphatic and blood microvessels than carcinomas arising from other sites. The lymphatic and blood microvessel densities did not correlate with tumor size. High BVD was strongly correlated with high LVD. Neither BVD nor LVD showed any association with survival in our series. The immune system plays an important role in tumorigenesis, as neoplastic cells have to escape cytotoxic lymphocytes in order to survive. Several candidate HLA class II alleles have been reported to be prognostic in cervical carcinoma, an epithelial malignancy resembling HNSCC.
These alleles may have an impact on head and neck carcinomas as well. We determined HLA-DRB1* and -DQB1* alleles in HNSCC patients. Healthy organ donors served as controls. The Inno-LiPA reverse dot-blot kit was used to identify alleles in patient samples. No single haplotype was found to be predictive of either the risk for head and neck cancer or the clinical course of the disease. However, alleles observed to be prognostic in cervical carcinomas showed a similar tendency in our series. DRB1*03 was associated with node-negative disease at diagnosis. DRB1*08 and DRB1*13 were associated with early-stage disease; DRB1*04 was associated with a lower risk of tumor relapse; and DQB1*03 and DQB1*0502 were more frequent in controls than in patients. However, these associations reached only borderline significance in our HNSCC patients.
Abstract:
Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard of care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements for optimizing and fine-tuning post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against the adverse effects of the medication. The present study reports the results of a retrospective investigation into individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and determinations of glucocorticoid biological activity. Subgroups of a total of 178 patients who received renal transplants in 1988-2006 were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejection and preserve graft function on the one hand, and to avoid potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics; therapeutic monitoring of CsA is therefore mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and to the blood CsA concentration 0-4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient according to the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were recorded for three weeks after TX and compared with the pharmacokinetically predicted dose. After TX, dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation with the actually administered doses after TX was best in those patients whose predicted dose was clearly higher or lower (> ±25%) than the average in their age group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate.
Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients who received renal transplants in 2001-2006. C0, C2 and acute rejection episodes were recorded during the post-TX hospitalization, and also three months after TX, when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX, the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted, to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand and to avoid over-exposure on the other. Controlling rejection in the early months after transplantation is crucial, as it may contribute to the development of long-term allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of treating subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared with 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. The glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73m2). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as part of their baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of patients, the well-known steroid-related adverse effects are not uncommon. A previous study in Finnish pediatric TX patients showed that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with adverse effects. In the present study, MP AUC was measured in sixteen stable maintenance patients, and a correlation was found with excess weight gain during the 12 months after TX, as well as with height deficit. A novel bioassay measuring activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with serum MP concentration. The findings of this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus in identifying patients who might be at risk of excessive or insufficient immunosuppression.
Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, but possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial as it improves the prospects of good long-term graft function.
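Both the CsA and MP exposure measures above rest on the same primitive: the area under the concentration-time curve, usually computed from timed blood samples with the linear trapezoidal rule. A minimal Python sketch; the sampling times and concentrations below are invented and are not data from the study:

```python
def auc_trapezoidal(times_h, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times_h, concs), zip(times_h[1:], concs[1:])):
        auc += (t1 - t0) * (c0 + c1) / 2.0
    return auc

# Invented blood CsA concentrations (µg/L) over a 12-hour dosing interval:
times = [0, 1, 2, 4, 6, 8, 12]
concs = [150, 900, 1100, 700, 450, 300, 180]
print(f"AUC(0-12) ≈ {auc_trapezoidal(times, concs):.0f} µg·h/L")
```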
Abstract:
Background. Kidney transplantation (KTX) is considered to be the best treatment for terminal uremia. Despite improvements in short-term graft survival, a considerable number of kidney allografts are lost due to the premature death of patients with a functioning kidney and to chronic allograft nephropathy (CAN). Aim. To investigate the risk factors involved in the progression of CAN and to analyze diagnostic methods for this entity. Materials and methods. Altogether, 153 implant and 364 protocol biopsies obtained between June 1996 and April 2008 were analyzed. The biopsies were classified according to Banff ’97 and the chronic allograft damage index (CADI). Immunohistochemistry for TGF-β1 was performed on 49 biopsies. Kidney function was evaluated by creatinine and/or cystatin C measurement and by various estimates of glomerular filtration rate (GFR). Demographic data of the donors and recipients were recorded after 2 years’ follow-up. Results. Most of the 3-month biopsies (73%) were nearly normal. The mean CADI score in the 6-month biopsies decreased significantly after 2001. Diastolic hypertension correlated with ΔCADI. Serum creatinine concentration at hospital discharge and glomerulosclerosis were risk factors for ΔCADI. High total and LDL cholesterol, low HDL cholesterol and hypertension correlated with chronic histological changes. The mean age of the donors increased from 41 to 52 years. Older donors were more often women who had died from an underlying disease. The prevalence of delayed graft function (DGF) increased over the years, while acute rejections (AR) decreased significantly. Subclinical AR was observed in 4% and did not affect long-term allograft function or CADI. The recipients’ drug treatment was modified over the course of the studies: mycophenolate mofetil, tacrolimus, statins and blockers of the renin-angiotensin system were more frequently prescribed after 2001. Patients with a higher ΔCADI had lower GFR during follow-up. A CADI over 2 was best predicted by creatinine, although with modest sensitivity and specificity. Neither cystatin C nor other estimates of GFR were superior to creatinine for CADI prediction. Cyclosporine A toxicity was seldom seen. A low cyclosporine A concentration 2 h after dosing correlated with TGF-β1 expression in interstitial inflammatory cells, and this predicted worse graft function. Conclusions. The progression of CAN has been affected by two major factors: the donors’ characteristics and the recipients’ hypertension. The increased prevalence of DGF might be a consequence of the acceptance of older donors who had died from an underlying disease. Implant biopsies proved to be of prognostic value, and they are essential for comparison with subsequent biopsies. The progression of histological damage was associated with hypertension and dyslipidemia. The significance of the augmented expression of TGF-β1 in inflammatory cells is unclear, but it may be related to low immunosuppression. Serum creatinine is the most suitable tool for monitoring kidney allograft function on an everyday basis. However, protocol biopsies at 6 and 12 months predicted late kidney allograft dysfunction and affected the clinical management of the patients. Protocol biopsies are thus a suitable surrogate for use in clinical trials and for monitoring kidney allografts.
Abstract:
Background. Hyperlipidemia is a common concern in patients with heterozygous familial hypercholesterolemia (HeFH) and in cardiac transplant recipients. In both groups, an elevated serum LDL cholesterol level accelerates the development of atherosclerotic vascular disease and increases the rates of cardiovascular morbidity and mortality. The purpose of this study was to assess the pharmacokinetics, efficacy, and safety of cholesterol-lowering pravastatin in children with HeFH and in pediatric cardiac transplant recipients receiving immunosuppressive medication. Patients and Methods. The pharmacokinetics of pravastatin was studied in 20 HeFH children and in 19 pediatric cardiac transplant recipients receiving triple immunosuppression. The patients ingested a single 10-mg dose of pravastatin, and plasma pravastatin concentrations were measured for up to 10 or 24 hours. The efficacy and safety of pravastatin over one to two years were studied in 30 patients with HeFH (maximum dose 10 to 60 mg/day) and in 19 cardiac transplant recipients (10 mg/day). In a subgroup of 16 HeFH children, serum non-cholesterol sterol ratios (10² × mmol/mol of cholesterol), surrogate estimates of cholesterol absorption (cholestanol, campesterol, sitosterol) and synthesis (desmosterol and lathosterol), were studied at study baseline (on plant stanol esters) and during combination of pravastatin with plant stanol esters. In the transplant recipients, the lipoprotein levels and their mass compositions were analyzed before and after one year of pravastatin use, and then compared with values measured in 21 healthy pediatric controls. The transplant recipients were grouped into patients with transplant coronary artery disease (TxCAD) and patients without TxCAD, based on annual angiography evaluations before pravastatin. Results. In the cardiac transplant recipients, the mean area under the plasma concentration-time curve of pravastatin [AUC(0-10)], 264.1 ± 192.4 ng·h/mL, was nearly ten-fold higher than in the HeFH children (26.6 ± 17.0 ng·h/mL). By 2, 4, 6, 12 and 24 months of treatment, the LDL cholesterol levels in the HeFH children had decreased by 25%, 26%, 29%, 33%, and 32%, respectively. In the HeFH group, pravastatin treatment increased the markers of cholesterol absorption and decreased those of synthesis. High ratios of cholestanol to cholesterol were associated with poor cholesterol-lowering efficacy of pravastatin. In the cardiac transplant recipients, pravastatin 10 mg/day lowered LDL cholesterol by approximately 19%. Compared with the patients without TxCAD, patients with TxCAD had significantly lower HDL cholesterol concentrations and higher apoB-100/apoA-I ratios at baseline (1.0 ± 0.3 mmol/L vs. 1.4 ± 0.3 mmol/L, P = 0.031; and 0.7 ± 0.2 vs. 0.5 ± 0.1, P = 0.034) and after one year of pravastatin use (1.0 ± 0.3 mmol/L vs. 1.4 ± 0.3 mmol/L, P = 0.013; and 0.6 ± 0.2 vs. 0.4 ± 0.1, P = 0.005). Compared with healthy controls, the transplant recipients exhibited elevated serum triglycerides at baseline (median 1.3 [range 0.6-3.2] mmol/L vs. 0.7 [0.3-2.4] mmol/L, P=0.0002), which correlated negatively with their HDL cholesterol concentration (r = -0.523, P = 0.022). The recipients also exhibited higher apoB-100/apoA-I ratios (0.6 ± 0.2 vs. 0.4 ± 0.1, P = 0.005). In addition, elevated triglyceride levels were still observed after one year of pravastatin use (1.3 [0.5-3.5] mmol/L vs. 0.7 [0.3-2.4] mmol/L, P = 0.0004).
Clinically significant elevations in alanine aminotransferase, creatine kinase, or creatinine occurred in neither group. Conclusions. Immunosuppressive medication considerably increased plasma pravastatin concentrations. In both patient groups, pravastatin treatment was moderately effective, safe, and well tolerated. In the HeFH group, high baseline cholesterol absorption seemed to predispose patients to insufficient cholesterol-lowering efficacy of pravastatin. In the cardiac transplant recipients, low HDL cholesterol and a high apoB-100/apoA-I ratio were associated with the development of TxCAD. Even though pravastatin effectively lowered serum total and LDL cholesterol concentrations in the transplant recipients, it failed to normalize their elevated triglyceride levels and, in some patients, to prevent the progression of TxCAD.