Abstract:
This paper investigates the short-run effects of economic growth on carbon dioxide emissions from the combustion of fossil fuels and the manufacture of cement for 189 countries over the period 1961-2010. Contrary to what has previously been reported, we conclude that there is no strong evidence that the emissions-income elasticity is larger during individual years of economic expansion as compared to recession. Significant evidence of asymmetry emerges when effects over longer periods are considered. We find that economic growth tends to increase emissions not only in the same year, but also in subsequent years. Delayed effects - especially noticeable in the road transport sector - mean that emissions tend to grow more quickly after booms and more slowly after recessions. Emissions are more sensitive to fluctuations in industrial value added than agricultural value added, with services being an intermediate case. On the expenditure side, growth in consumption and growth in investment have similar implications for national emissions. External shocks have a relatively large emissions impact, and the short-run emissions-income elasticity does not appear to decline as incomes increase. Economic growth and emissions have been more tightly linked in fossil-fuel rich countries.
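One illustrative way to write down the asymmetry test described above (a sketch under assumed notation, not the paper's exact specification) is a panel regression of emissions growth on GDP growth split into expansion-year and recession-year components, with lags:

    \Delta \ln E_{it} = \alpha_i + \sum_{k=0}^{K} \left( \beta_k^{+} \, \Delta \ln Y^{+}_{i,t-k} + \beta_k^{-} \, \Delta \ln Y^{-}_{i,t-k} \right) + \varepsilon_{it}

Here \Delta \ln Y^{+} and \Delta \ln Y^{-} denote GDP growth in years of expansion and recession, respectively; equal coefficients (\beta_k^{+} = \beta_k^{-} for all lags k) correspond to a symmetric short-run emissions-income elasticity, and the lagged terms capture the delayed effects noted in the abstract.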
Abstract:
Purpose: To examine whether anterior scleral and conjunctival thickness undergoes significant diurnal variation over a 24-hour period. Methods: Nineteen healthy young adults (mean age 22 ± 2 years) with minimal refractive error (mean spherical equivalent refraction -0.08 ± 0.39 D) had measures of anterior scleral and conjunctival thickness collected using anterior segment optical coherence tomography (AS-OCT) at seven measurement sessions over a 24-hour period. The thickness of the temporal anterior sclera and conjunctiva was determined at 6 locations (each separated by 0.5 mm) at varying distances from the scleral spur for each subject at each measurement session. Results: Both the anterior sclera and the conjunctiva were found to undergo significant diurnal variations in thickness over a 24-hour period (both p < 0.01). The sclera and conjunctiva exhibited a similar pattern of diurnal change, with a small-magnitude thinning observed close to midday and a larger-magnitude thickening observed in the early morning immediately after waking. The amplitude of diurnal thickness change was larger in the conjunctiva (mean amplitude 69 ± 29 μm) than in the sclera (21 ± 8 μm). The conjunctiva exhibited its smallest magnitude of change at the scleral spur location (mean amplitude 56 ± 17 μm), whereas the sclera exhibited its largest magnitude of change at this location (52 ± 21 μm). Conclusions: This study provides the first evidence of diurnal variations occurring in the thickness of the anterior sclera and conjunctiva. Studies requiring precise measures of these anatomical layers should therefore take time of day into consideration. The majority of the observed changes occurred in the early morning immediately after waking and were of larger magnitude in the conjunctiva than in the sclera. Thickness changes at other times of the day were of smaller magnitude and generally not statistically significant.
Abstract:
In order to fully understand the process of European integration, it is of paramount importance to consider developments at the sub-national and local level. EU integration scholars shifted their attention to the local level only at the beginning of the 1990s with the concept of multi-level governance (MLG). While MLG is the first concept to scrutinise the position of local levels of public administration and other actors within the EU polity, I perceive it as too optimistic about the degree of influence it ascribes to local levels. Thus, learning from and combining MLG with other concepts, such as structural constructivism, helps to reveal some of the hidden aspects of EU integration and paint a more realistic picture of multi-level interaction. This thesis also answers the call for more case studies in order to conceptualise MLG further. After a critical study of theories and concepts of European integration, above all MLG, I will analyse sub-national and local government in Finland and Germany. I show how the sub-national level and local governments are embedded in the EU's multi-level structure of governance and how, through EU integration, those levels have been empowered but also how their scope of action has partially decreased. After theoretical and institutional contextualisation, I present the results of my empirical study of the EU's Community Initiative LEADER+. LEADER stands for 'Liaison Entre Actions de Développement de l'Économie Rurale' and aims at improving the economic conditions in Europe's rural areas. I was interested in how different actors construct and shape EU-financed rural development, especially in how local actors organised in so-called local action groups (LAGs) cooperate with other administrative units within the LEADER+ administrative chain. I also examined intra-institutional relations within those groups, in order to find out who the most influential and powerful actors within them are. Empirical data on the Finnish and German LAGs were first gathered through a survey, which was then supplemented and completed by interviewing LAG members, LAG managers, several civil servants from Finnish and German decision-making and managing authorities, and a civil servant from the EU Commission. My main argument is that in both Germany and Finland, the Community Initiative LEADER+ offered a space for multi-level interaction and local-level involvement, a space that on the one hand consists of highly motivated people actively contributing to the improvement of the quality of life and economy in Europe's countryside, but which, on the other hand, is dependent on and restricted by national administrative practices, implementation approaches and cultures. In Finland, the principle of tri-partition (kolmikantaperiaatte) in organising the executive committees of LAGs is very noticeable. In comparison to Germany, for instance, the representation of public administration in those committees is much more limited due to this principle. Furthermore, the mobilisation of local residents and the bringing together of actors from the local area with different social and institutional backgrounds to become an active part of LEADER+ were more successful in Finland than in Germany. Tri-partition as applied in Finland should serve as a model for similar policies in other EU member states. EU integration changed the formal and informal inter-institutional relations linking the different levels of government. The third sector, including non-governmental institutions and interest groups, gained access to policy-making processes and increasingly interacts with government institutions at all levels of public administration. These developments do not necessarily result in the empowerment of the local level.
Abstract:
Glial cell line-derived neurotrophic factor (GDNF) family ligands (GDNF, neurturin, persephin and artemin) signal through the receptor tyrosine kinase Ret by binding first to a co-receptor (GFRα1-4) that is attached to the plasma membrane. The GDNF family factors can support the survival of various peripheral and central neuronal populations and also have important functions outside the nervous system, especially in kidney development. Activating mutations in the RET gene cause tumours in neuroendocrine cells, whereas inactivating mutations in RET are found in patients with Hirschsprung's disease (HSCR), which is characterized by a loss of ganglion cells along the intestine. The aim of this study was to examine the in vivo functions of the neurturin receptor GFRα2 and the persephin receptor GFRα4 using knockout (KO) mice. Mice lacking GFRα2 grow poorly after weaning and have deficits in parasympathetic and enteric innervation. This study shows that impaired secretion of the salivary glands and exocrine pancreas contributes to growth retardation in GFRα2-KO mice. These mice have a reduced number of intrapancreatic neurons and decreased cholinergic innervation of the exocrine pancreas as well as reduced excitatory fibres in the myenteric plexus of the small intestine. This study also demonstrates that GFRα2-mediated Ret signalling is required for target innervation and maintenance of soma size of sympathetic cholinergic neurons and sensory nociceptive IB4-binding neurons. Furthermore, lack of GFRα2 in mice results in deficient perception of temperatures above and below thermoneutrality and in an attenuated inflammatory pain response. GFRα4 is co-expressed with Ret predominantly in calcitonin-producing thyroid C-cells in the mouse. In this study, GFRα4-deficient mice were generated. The mice show no gross developmental deficits and have a normal number of C-cells. However, young but not adult mice lacking GFRα4 have a lower production of calcitonin in thyroid tissue and, consequently, an increased bone formation rate. Thus, GFRα4/Ret signalling may regulate calcitonin production. In conclusion, this study reveals that GFRα2/Ret signalling is crucial for the development and function of specific components of the peripheral nervous system and that GFRα4-mediated Ret signalling is required for controlling transmitter synthesis in thyroid C-cells.
Abstract:
Maltose and maltotriose are the two most abundant sugars in brewer's wort, and thus brewer's yeast's ability to utilize them efficiently is of major importance in the brewing process. The increasing tendency to utilize high- and very-high-gravity worts containing increased concentrations of maltose and maltotriose renders the need for efficient transport of these sugars even more pronounced. Residual maltose and especially maltotriose are quite often present after high- and very-high-gravity fermentations. Sugar uptake capacity has been shown to be the rate-limiting factor for maltose and maltotriose utilization. The main aim of the present study was to find novel ways to improve maltose and maltotriose utilization during the main fermentation. Maltose and maltotriose uptake characteristics of several ale and lager strains were studied. Genotype determination of the genes needed for maltose and maltotriose utilization was performed. Maltose uptake inhibition studies were performed to reveal the dominant transporter types actually functioning in each of the strains. The temperature dependence of maltose transport was studied for the ale and lager strains as well as for each of the individual sugar transporter proteins Agt1p, Malx1p and Mtt1p. The AGT1 promoter regions of one ale and two lager strains were sequenced by chromosome walking, and the promoter elements were searched for using computational methods. The results showed that ale and lager strains predominantly use different transporter types for maltose and maltotriose uptake. The Agt1 transporter was found to be the dominant maltose/maltotriose transporter in the ale strains, whereas Malx1- and Mtt1-type transporters dominated in the lager strains. All lager strains studied were found to possess a non-functional Agt1 transporter. The ale strains were observed to be more sensitive to a temperature decrease in their maltose uptake than the lager strains. Individual transporters were observed to differ in their sensitivity to a temperature decrease, and their temperature dependence was shown to decrease in the order Agt1 ≥ Malx1 > Mtt1. The different temperature dependence of the ale and lager strains was observed to be due to the different dominant maltose/maltotriose transporters that these strains possess. The AGT1 promoter regions of the ale and lager strains were found to differ markedly from the corresponding regions of laboratory strains. The ale strain was found to possess an extra MAL-activator binding site compared to the lager strains. Improved maltose and maltotriose uptake capacity was obtained with a modified lager strain in which the AGT1 gene was repaired and placed under the control of a strong promoter. The modified strains fermented wort faster and more completely, producing beers containing more ethanol and less residual maltose and maltotriose. Significant savings in the main fermentation time were obtained when the modified strains were used: time savings of 8-20% were obtained in high-gravity wort fermentations and even 11-37% in very-high-gravity wort fermentations. These are economically significant changes and would cause a marked increase in annual output from brewhouse and fermentor facilities of the same size.
Abstract:
Combinations of cellular immune-based therapies with chemotherapy and other antitumour agents may be of significant clinical benefit in the treatment of many forms of cancer. Gamma delta (γδ) T cells are of particular interest for use in such combined therapies due to their potent antitumour cytotoxicity and relative ease of generation in vitro. Here, we demonstrate high levels of cytotoxicity against solid tumour-derived cell lines with combination treatment utilizing Vγ9Vδ2 T cells, chemotherapeutic agents and the bisphosphonate, zoledronate. Pre-treatment with low concentrations of chemotherapeutic agents or zoledronate sensitized tumour cells to rapid killing by Vγ9Vδ2 T cells with levels of cytotoxicity approaching 90%. In addition, zoledronate enhanced the chemotherapy-induced sensitization of tumour cells to Vγ9Vδ2 T cell cytotoxicity resulting in almost 100% lysis of tumour targets in some cases. Vγ9Vδ2 T cell cytotoxicity was mediated by perforin following TCR-dependent and isoprenoid-mediated recognition of tumour cells. Production of IFN-γ by Vγ9Vδ2 T cells was also induced after exposure to sensitized targets. We conclude that administration of Vγ9Vδ2 T cells at suitable intervals after chemotherapy and zoledronate may substantially increase antitumour activities in a range of malignancies.
Abstract:
In this thesis, three icosahedral lipid-containing double-stranded (ds) deoxyribonucleic acid (DNA) bacteriophages have been studied: PRD1, Bam35 and P23-77. The work focuses on the entry, exit and structure of the viruses. PRD1 is the type member of the Tectiviridae family, infecting a variety of Gram-negative bacteria. The PRD1 receptor binding complex, consisting of the penton protein P31, the spike protein P5 and the receptor binding protein P2, recognizes a specific receptor on the host surface. In this study we found that the transmembrane protein P16 has an important stabilization function as the fourth member of the receptor binding complex, and that P16 may have a role in the formation of a tubular membrane structure, which is needed for the ejection of the genome into the cell. Phage Bam35 (Tectiviridae), which infects Gram-positive hosts, has earlier been found to resemble PRD1 in morphology and genome organization. The uncharacterized early and late events in the Bam35 life cycle were studied by electrochemical methods. Physiological changes at the beginning of the infection were found to be similar in both lysogenic and nonlysogenic cell lines, with Bam35 inducing a transient decrease in membrane voltage and a K+ efflux. At the end of the infection cycle, physiological changes were observed only in the nonlysogenic cell line. The strong K+ efflux 40 min after infection and the induced premature cell lysis suggest that Bam35 has a holin-endolysin lysis system similar to that of PRD1. The thermophilic icosahedral dsDNA Thermus phages P23-65H, P23-72 and P23-77 have been proposed to belong to the Tectiviridae family. In this study these phages were compared to each other. Analysis of structural protein patterns and stability revealed these phages to be very similar but not identical. The most stable of the studied viruses, P23-77, was analyzed in more detail. Cryo-electron microscopy and three-dimensional image reconstruction were used to determine the structure of the virus to 14 Å resolution. Results of thin-layer chromatography for neutral lipids, together with analysis of the three-dimensional reconstruction of the P23-77 virus particle, revealed the presence of an internal lipid membrane. The overall capsid architecture of P23-77 is similar to that of PRD1 and Bam35, but it most closely resembles the capsid structure of the archaeal virus SH1. This complicates the classification of icosahedral dsDNA viruses that contain an internal lipid membrane.
Abstract:
With transplant rejection rendered a minor concern and survival rates after liver transplantation (LT) steadily improving, long-term complications are attracting more attention. Current immunosuppressive therapies, together with other factors, are accompanied by considerable long-term toxicity, which clinically manifests as renal dysfunction, high risk for cardiovascular disease, and cancer. This thesis investigates the incidence, causes, and risk factors for such renal dysfunction, cardiovascular risk, and cancer after LT. Long-term effects of LT are further addressed by surveying the quality of life and employment status of LT recipients. The consecutive patients included had undergone LT at Helsinki University Hospital from 1982 onwards. Data regarding renal function (creatinine and estimated glomerular filtration rate, GFR) were recorded before and repeatedly after LT in 396 patients. The presence of hypertension, dyslipidemia, diabetes, impaired fasting glucose, and overweight/obesity before and 5 years after LT was determined among 77 patients transplanted for acute liver failure. The entire cohort of LT patients (540 patients), including both children and adults, was linked with the Finnish Cancer Registry, and the numbers of cancers observed were compared to site-specific expected numbers based on national cancer incidence rates stratified by age, gender, and calendar time. Health-related quality of life (HRQoL), measured by the 15D instrument, and employment status were surveyed among all adult patients alive in 2007 (401 patients). The response rate was 89%. Posttransplant cardiovascular risk factor prevalence and HRQoL were compared with those in the age- and gender-matched Finnish general population. The cumulative risk for chronic kidney disease increased from 10% at 5 years to 16% at 10 years following LT. GFR up to 10 years after LT could be predicted by the GFR at 1 year. In patients transplanted for chronic liver disease, a moderate correlation of pretransplant GFR with later GFR was also evident, whereas in acute liver failure patients, even severe pretransplant renal dysfunction often recovered after LT. By 5 years after LT, 71% of acute liver failure patients were receiving antihypertensive medications, 61% were exhibiting dyslipidemia, 10% were diabetic, 32% were overweight, and 13% obese. Compared with the general population, only hypertension displayed a significantly elevated prevalence among patients (2.7-fold), whereas patients exhibited 30% less dyslipidemia and 71% less impaired fasting glucose. The cumulative incidence of cancer was 5% at 5 years and 13% at 10 years. Compared with the general population, patients were subject to a 2.6-fold cancer risk, with non-melanoma skin cancer (standardized incidence ratio, SIR, 38.5) and non-Hodgkin lymphoma (SIR 13.9) being the predominant malignancies. Non-Hodgkin lymphoma was associated with male gender, young age, and the immediate posttransplant period, whereas old age and antibody induction therapy raised skin-cancer risk. HRQoL deviated from the values in the general population to a clinically unimportant degree, but significant deficits among patients were evident in some physical domains. HRQoL did not seem to decrease with longer follow-up. Although 87% of patients reported improved working capacity, data on return to working life showed marked age-dependency: among patients aged less than 40 at LT, 70 to 80% returned to work; among those aged 40 to 50, 55%; and among those above 50, 15% to 28%.
The most common cause for unemployment was early retirement before LT. Those patients employed exhibited better HRQoL than those unemployed. In conclusion, although renal impairment, hypertension, and cancer are evidently common after LT and increase with time, patients’ quality of life remains comparable with that of the general population.
Abstract:
Background and aims. Since 1999, hospitals in the Finnish Hospital Infection Program (SIRO) have reported data on surgical site infections (SSI) following major hip and knee surgery. The purpose of this study was to obtain detailed information to support prevention efforts by analyzing SIRO data on SSIs, to evaluate possible factors affecting the surveillance results, and to assess the disease burden of postoperative prosthetic joint infections in Finland. Methods. Procedures under surveillance included total hip (THA) and total knee arthroplasties (TKA), and the open reduction and internal fixation (ORIF) of femur fractures. Hospitals prospectively collected data using common definitions and a written protocol, and also performed postdischarge surveillance. In the validation study, a blinded retrospective chart review was performed and infection control nurses were interviewed. Patient charts of deep incisional and organ/space SSIs were reviewed, and data from three sources (SIRO, the Finnish Arthroplasty Register, and the Finnish Patient Insurance Centre) were linked for capture-recapture analyses. Results. During 1999-2002, the overall SSI rate was 3.3% after 11,812 orthopedic procedures (median length of stay, eight days). Of all SSIs, 56% were detected after discharge. The majority of deep incisional and organ/space SSIs (65/108, 60%) were detected on readmission. Positive and negative predictive values, sensitivity, and specificity for SIRO surveillance were 94% (95% CI, 89-99%), 99% (99-100%), 75% (56-93%), and 100% (97-100%), respectively. Of the 9,831 total joint replacements performed during 2001-2004, 7.2% (THA 5.2% and TKA 9.9%) of the implants were inserted in a simultaneous bilateral operation. Patients who underwent bilateral operations were younger, healthier, and more often male than those who underwent unilateral procedures. The rates of deep SSIs or mortality did not differ between bilateral and unilateral THAs or TKAs. Four deep SSIs were reported following bilateral operations (antimicrobial prophylaxis administered 48-218 minutes before incision). In the three registers, altogether 129 prosthetic joint infections were identified after 13,482 THAs and TKAs during 1999-2004. After correction with the positive predictive value of SIRO (91%), a log-linear model provided an estimated overall prosthetic joint infection rate of 1.6% after THA and 1.3% after TKA. The sensitivity of the SIRO surveillance ranged from 36% to 57%. According to this estimation, nearly 200 prosthetic joint infections could occur in Finland each year (the average from 1999 to 2004) after THA and TKA. Conclusions. Postdischarge surveillance had a major impact on SSI rates after major hip and knee surgery. A minority of deep incisional and organ/space SSIs would be missed, however, if postdischarge surveillance by questionnaire were not performed. According to the validation study, most SSIs reported to SIRO were true infections. Some SSIs were missed, revealing some weakness in case finding. Variation in diagnostic practices may also affect SSI rates. No differences were found in deep SSI rates or mortality between bilateral and unilateral THA and TKA. However, the patient populations of these two groups differed. Bilateral operations require specific attention to antimicrobial prophylaxis as well as to data management in the surveillance database. The true disease burden of prosthetic joint infections may be heavier than the rates from national nosocomial surveillance systems usually suggest.
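To illustrate the kind of three-source capture-recapture estimation mentioned above, the sketch below fits a main-effects (independence) log-linear Poisson model to register overlap counts and predicts the cell missed by all three registers. The counts, variable names and use of the statsmodels library are illustrative assumptions; they are not the study's data or code.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical overlap counts for three registers (A = SIRO, B = Arthroplasty
    # Register, C = Patient Insurance Centre); the cell missed by all three
    # registers (A = B = C = 0) is unobserved and must be estimated.
    cells = pd.DataFrame({
        "A":     [1, 1, 1, 1, 0, 0, 0],
        "B":     [1, 1, 0, 0, 1, 1, 0],
        "C":     [1, 0, 1, 0, 1, 0, 1],
        "count": [10, 25, 8, 30, 6, 20, 30],
    })

    # Main-effects Poisson log-linear model; interaction terms could be added
    # to allow for dependence between the registers.
    X = sm.add_constant(cells[["A", "B", "C"]])
    fit = sm.GLM(cells["count"], X, family=sm.families.Poisson()).fit()

    # Predicted count for the unobserved cell (missed by all three registers)
    # and the resulting estimate of the total number of infections.
    missed = fit.predict([[1.0, 0, 0, 0]])[0]  # exog order: const, A, B, C
    total = cells["count"].sum() + missed
    print(f"estimated missed infections: {missed:.1f}; estimated total: {total:.1f}")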
Abstract:
Infection is a major cause of mortality and morbidity after thoracic organ transplantation. The aim of the present study was to evaluate the infectious complications after lung and heart transplantation, with a special emphasis on the usefulness of bronchoscopy and the demonstration of cytomegalovirus (CMV), human herpes virus (HHV)-6, and HHV-7. We reviewed all the consecutive bronchoscopies performed on heart transplant recipients (HTRs) from May 1988 to December 2001 (n = 44) and lung transplant recipients (LTRs) from February 1994 to November 2002 (n = 472). To compare different assays in the detection of CMV, a total of 21 thoracic organ transplant recipients were prospectively monitored by CMV pp65-antigenemia, DNAemia (PCR), and mRNAemia (NASBA) tests. The antigenemia test was the reference assay for therapeutic intervention. In addition to CMV antigenemia, 22 LTRs were monitored for HHV-6 and HHV-7 antigenemia. The diagnostic yield of the clinically indicated bronchoscopies was 41 % in the HTRs and 61 % in the LTRs. The utility of the bronchoscopy was highest from one to six months after transplantation. In contrast, the findings from the surveillance bronchoscopies performed on LTRs led to a change in the previous treatment in only 6 % of the cases. Pneumocystis carinii and CMV were the most commonly detected pathogens. Furthermore, 15 (65 %) of the P. carinii infections in the LTRs were detected during chemoprophylaxis. None of the complications of the bronchoscopies were fatal. Antigenemia, DNAemia, and mRNAemia were present in 98 %, 72 %, and 43 % of the CMV infections, respectively. The optimal DNAemia cut-off levels (sensitivity/specificity) were 400 (75.9/92.7 %), 850 (91.3/91.3 %), and 1250 (100/91.5 %) copies/ml for antigenemia levels of 2, 5, and 10 pp65-positive leukocytes/50 000 leukocytes, respectively. The sensitivities of the NASBA were 25.9, 43.5, and 56.3 % in detecting the same cut-off levels. CMV DNAemia was detected in 93 % and mRNAemia in 61 % of the CMV antigenemias requiring antiviral therapy. HHV-6, HHV-7, and CMV antigenemia were detected in 20 (91 %), 11 (50 %), and 12 (55 %) of the 22 LTRs (median 16, 31, and 165 days), respectively. HHV-6 appeared in 15 (79 %), HHV-7 in seven (37 %), and CMV in one (7 %) of these patients during ganciclovir or valganciclovir prophylaxis. One case of pneumonitis and another of encephalitis were associated with HHV-6. In conclusion, bronchoscopy is a safe and useful diagnostic tool in LTRs and HTRs with a suspected respiratory infection, but the role of surveillance bronchoscopy in LTRs remains controversial. The PCR assay performs comparably to the antigenemia test in guiding the pre-emptive therapy against CMV when threshold levels of over 5 pp65-antigen-positive leukocytes are used. In contrast, the low sensitivity of NASBA limits its usefulness. HHV-6 and HHV-7 activation is common after lung transplantation despite ganciclovir or valganciclovir prophylaxis, but clinical manifestations are infrequently linked to them.
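As a simple illustration of how a DNAemia cut-off can be evaluated against the antigenemia reference assay described above, the sketch below computes sensitivity and specificity for a chosen cut-off. The function name and the paired example values are hypothetical, not data from the study.

    def sens_spec(dna_copies_per_ml, antigenemia_positive, cutoff):
        """Sensitivity and specificity of a DNAemia cut-off (copies/ml)
        against a reference classification (True = antigenemia positive)."""
        pairs = list(zip(dna_copies_per_ml, antigenemia_positive))
        tp = sum(1 for d, ref in pairs if d >= cutoff and ref)
        fn = sum(1 for d, ref in pairs if d < cutoff and ref)
        fp = sum(1 for d, ref in pairs if d >= cutoff and not ref)
        tn = sum(1 for d, ref in pairs if d < cutoff and not ref)
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical paired samples: DNA copies/ml and the reference result.
    dna = [120, 600, 900, 1500, 300, 2000, 50, 700]
    ref = [False, True, True, True, False, True, False, False]
    sensitivity, specificity = sens_spec(dna, ref, cutoff=850)
    print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")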
Abstract:
The soy-derived phytoestrogen genistein and 17β-estradiol (E2), the principal endogenous estrogen in women, are also potent antioxidants protecting LDL and HDL lipoproteins against oxidation. This protection is enhanced by esterification with fatty acids, resulting in lipophilic molecules that accumulate in lipoproteins or fatty tissues. The aims were to investigate whether genistein becomes esterified with fatty acids in human plasma and accumulates in lipoproteins, and to develop a method for quantitating these esters; to study the antioxidant activity of different natural and synthetic estrogens in LDL and HDL; and to determine the E2 esters in visceral and subcutaneous fat in late pregnancy and in pre- and postmenopause. Human plasma was incubated with [3H]genistein and its esters were analyzed from lipoprotein fractions. Time-resolved fluoroimmunoassay (TR-FIA) was used to quantitate genistein esters in monkey plasma after subcutaneous and oral administration. The E2 esters in women's serum and adipose tissue were also quantitated using TR-FIA. The antioxidant activity of estrogen derivatives (n=43) on LDL and HDL was assessed by monitoring the copper-induced formation of conjugated dienes. Human plasma was shown to produce lipoprotein-bound genistein fatty acid esters, providing a possible explanation for the previously reported increased oxidation resistance of LDL particles during intake of soybean phytoestrogens. Genistein esters were introduced into blood by subcutaneous administration. The antioxidant effect of estrogens on lipoproteins is highly structure-dependent. LDL and HDL were protected against oxidation by many unesterified, yet lipophilic derivatives. The strongest antioxidants had an unsubstituted A-ring phenolic hydroxyl group with one or two adjacent methoxy groups. E2 ester levels were high during late pregnancy. The median concentration of E2 esters in pregnancy serum was 0.42 nmol/l (n=13) and in pre- (n=8) and postmenopause (n=6) 0.07 and 0.06 nmol/l, respectively. In visceral fat, the concentration of E2 esters was 4.24 nmol/l in pregnancy and 0.82 and 0.74 nmol/l in pre- and postmenopause, respectively. The results from subcutaneous fat were similar. In serum and fat during pregnancy, E2 esters constituted about 0.5% and 10% of the free E2, respectively. In non-pregnant women most of the E2 in fat was esterified (ester/free ratio 150-490%). In postmenopause, E2 levels in fat greatly exceeded those in serum, the majority being esterified. The pathways for fatty acid esterification of steroid hormones are found in organisms ranging from invertebrates to vertebrates. The evolutionary preservation and relative abundance of E2 esters, especially in fat tissue, suggest a biological function, most likely in providing a readily available source of E2. The body's own estrogen reservoir could be used as a source of E2 by pharmacologically regulating E2 esterification or hydrolysis.
Abstract:
Some perioperative clinical factors related to the primary cemented arthroplasty operation for osteoarthritis of the hip or knee joint are studied and discussed in this thesis. In a randomized, double-blind study, 39 patients were divided into two groups: one receiving tranexamic acid and the other not receiving it. Tranexamic acid was given in a dose of 10 mg/kg before the operation and twice thereafter, at 8-hour intervals. Total blood loss was smaller in the tranexamic acid group than in the control group. No thromboembolic complications were noticed. In a prospective, randomized study, 58 patients with hip arthroplasty and 39 patients with knee arthroplasty were divided into groups with and without postoperative closed-suction drainage. There was no difference in wound healing, postoperative blood transfusions, complications or range of motion. As a result of this study, the use of drains is no longer recommended. In a randomized study, the effectiveness of a femoral nerve block (25 patients) was compared with other methods of pain control (24 patients) on the first postoperative day after total knee arthroplasty. The femoral block consisted of a single injection administered at the patients' bedside during the surgeon's hospital rounds. Femoral block patients reported less pain and required half the amount of oxycodone. Additional femoral block or continued epidural analgesia was required more frequently by the control group patients. Pain management with femoral blocks resulted in less work for the nursing staff. In a retrospective study of 422 total hip and knee arthroplasty cases, the C-reactive protein levels and clinical course were examined. After hip and knee arthroplasty, the maximal C-reactive protein values are seen on the second and third postoperative days, after which the level decreases rapidly. There is no difference between patients with cemented or uncemented prostheses. Major postoperative complications may cause a further increase in C-reactive protein levels at one and two weeks. In-hospital and outpatient postoperative control radiographs of 200 hip and knee arthroplasties were reviewed retrospectively. If postoperative radiographs are of good quality, there seems to be no need for early repeat radiographs. The quality and safety of follow-up are not compromised by limiting follow-up radiographs to cases with clinical indications. Exposure of the patients and the staff to radiation is reduced. It is sufficient for the radiographs to be read by the treating orthopaedic surgeon alone. These factors may seem separate from each other, but linking them together may help the treating orthopaedic surgeon devise an adequate patient care strategy. Notable savings can be achieved.
Abstract:
Understanding the mechanisms associated with the emergence of castration-resistant prostate cancer cells (CRPC) after androgen deprivation therapy (ADT) is essential to create new therapeutic agents to counteract this aggressive form of prostate cancer (PCa). Because proteases are involved in almost all cancer-associated mechanisms, such as cell proliferation, invasion and metastasis, we are interested in their modulation in PCa after ADT and their involvement in CRPC.
Abstract:
Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard of care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements for optimizing and fine-tuning the post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against adverse effects of the medication. In the present study, the results of a retrospective investigation into individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid biological activity determinations, are reported. Subgroups of a total of 178 patients who received renal transplants in 1988-2006 were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejections and preserve graft function on the one hand, and to avoid the potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics. Therapeutic monitoring of CsA is, therefore, mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and blood CsA concentration 0-4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient by the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were collected for three weeks after TX and compared to the pharmacokinetically predicted dose. After TX, dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation with the actually administered doses after TX was best in those patients who had a predicted dose clearly higher or lower (> ±25%) than the average in their age group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate. Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients who received renal transplants in 2001-2006. C0, C2 and acute rejection episodes were recorded during the post-TX hospitalization, and also three months after TX when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX, the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand and to avoid over-exposure on the other. Controlling rejection in the early months after transplantation is crucial, as it may contribute to the development of long-term allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of treating subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared with 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. The glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73m2, respectively). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as part of the baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of the patients, the well-known steroid-related adverse effects are not uncommon. A previous study in Finnish pediatric TX patients showed that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with the adverse effects. In the present study, MP AUC was measured in sixteen stable maintenance patients, and a correlation with excess weight gain during the 12 months after TX, as well as with height deficit, was found. A novel bioassay measuring the activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with the serum MP concentration. The findings of this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus identifying the patients who might be at risk for excessive or insufficient immunosuppression.
Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, but possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial as it improves the prospects of good long-term graft function.
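The steroid exposure measure referred to above, the area under the concentration-time curve (AUC), can be illustrated with a simple trapezoidal-rule calculation. The sampling times and concentrations below are hypothetical and only show the principle, not the study's data.

    import numpy as np

    # Hypothetical methylprednisolone concentration-time profile after an oral dose.
    times_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])          # hours after dose
    conc_ug_l = np.array([0.0, 60.0, 95.0, 80.0, 50.0, 30.0, 18.0, 6.0])   # µg/l

    # Trapezoidal-rule estimate of drug exposure over the sampling interval.
    auc = np.trapz(conc_ug_l, times_h)
    print(f"AUC(0-12 h) is approximately {auc:.0f} µg*h/l")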
Abstract:
Ischemic stroke (IS) is a heterogeneous disease in which outcome is influenced by many factors. The hemostatic system is activated in association with cerebral ischemia, and thus markers measuring coagulation, fibrinolysis, and vasoactivity could be useful tools in clinical practice. We investigated whether repeated measurements of these markers reveal patterns that might help in evaluating IS patients, including the early diagnosis of stroke subtypes, in estimating prognosis and risk of recurrence, and in selecting a treatment for secondary prevention of stroke. The vasoconstrictor peptide endothelin-1 (ET-1), homocysteine (Hcy), indicators of thrombin formation and activation (prothrombin fragment 1+2/F1+2, thrombin-antithrombin complex/TAT), indicators of plasmin formation and fibrinolysis (tissue plasminogen activator/t-PA, plasminogen activator inhibitor-1/PAI-1, and D-dimer), and natural anticoagulants (antithrombin/AT, protein C/PC, and protein S/PS) were measured in 102 consecutive mild to moderate IS patients on four occasions: on admission and at 1 week, 1 month, and 3 months after stroke, and once in controls. All patients underwent neurological examination and blood sampling in the same session. Furthermore, 42 IS patients with the heterozygous factor V Leiden mutation (FVLm) were selected from 740 IS patients without an obvious etiology and evaluated in detail for specific clinical, laboratory, and radiological features. Measurements of ET-1 and Hcy levels did not disclose information that could aid in the diagnostic evaluation of IS patients. The F1+2 level at 3 months after IS correlated positively with the recurrence of thromboembolic events and thus may be used as a predictive marker of subsequent cerebral events. The D-dimer and AT levels on admission and 1 week after IS were strongly associated with stroke severity, outcome, and disability. The specific analysis of IS patients with FVLm revealed a more frequent positive family history of thrombosis, a higher prevalence of peripheral vascular disease, and multiple infarctions in brain images, most of which were 'silent infarcts'. The results of this study support the view that IS patients with sustained activation of both the fibrinolytic and the coagulation systems and increased thrombin generation may have an unfavorable prognosis. The level of activation may reflect the ongoing thrombotic process and the extent of thrombosis. Changes in these markers could be useful in predicting the prognosis of IS patients. A clear need exists for a randomized prospective study to determine whether a subgroup of IS patients with markers indicating activation of the fibrinolytic and coagulation systems might benefit from more aggressive secondary prevention of IS.