Abstract:
In order to fully understand the process of European integration, it is of paramount importance to consider developments at the sub-national and local level. EU integration scholars shifted their attention to the local level only at the beginning of the 1990s with the concept of multi-level governance (MLG). While MLG is the first concept to scrutinise the position of local levels of public administration and other actors within the EU polity, I perceive it as too optimistic in the degree of influence it ascribes to local levels. Thus, learning from and combining MLG with other concepts, such as structural constructivism, helps to reveal some of the hidden aspects of EU integration and paint a more realistic picture of multi-level interaction. This thesis also answers the call for more case studies in order to conceptualise MLG further. After a critical study of theories and concepts of European integration, above all MLG, I analyse sub-national and local government in Finland and Germany. I show how the sub-national level and local governments are embedded in the EU's multi-level structure of governance and how, through EU integration, those levels have been empowered but also how their scope of action has partially decreased. After theoretical and institutional contextualisation, I present the results of my empirical study of the EU's Community Initiative LEADER+. LEADER stands for "Liaison Entre Actions de Développement de l'Économie Rurale" and aims at improving the economic conditions in Europe's rural areas. I was interested in how different actors construct and shape EU-financed rural development, especially in how local actors organised in so-called local action groups (LAGs) cooperate with other administrative units within the LEADER+ administrative chain. I also examined intra-institutional relations within those groups, in order to find out who the most influential and powerful actors within them are. Empirical data on the Finnish and German LAGs was first gathered through a survey, which was then supplemented and completed by interviewing LAG members, LAG managers, several civil servants from Finnish and German decision-making and managing authorities, and a civil servant from the EU Commission. My main argument is that in both Germany and Finland, the Community Initiative LEADER+ offered a space for multi-level interaction and local-level involvement, a space that on the one hand consists of highly motivated people actively contributing to the improvement of the quality of life and economy in Europe's countryside, but which on the other hand is dependent on, and also restricted by, national administrative practices, implementation approaches and cultures. In Finland, the principle of tri-partition (kolmikantaperiaate) in organising the executive committees of LAGs is very noticeable. In comparison to Germany, for instance, the representation of public administration in those committees is much more limited due to this principle. Furthermore, the mobilisation of local residents and the bringing together of actors from the local area with different social and institutional backgrounds to become an active part of LEADER+ was more successful in Finland than in Germany. Tri-partition as applied in Finland should serve as a model for similar policies in other EU member states. EU integration changed the formal and informal inter-institutional relations linking the different levels of government.
The third sector, including non-governmental institutions and interest groups, has gained access to policy-making processes and increasingly interacts with government institutions at all levels of public administration. These developments do not necessarily result in the empowerment of the local level.
Abstract:
The glial cell line-derived neurotrophic factor (GDNF) family ligands (GDNF, neurturin, persephin and artemin) signal through the receptor tyrosine kinase Ret by binding first to a co-receptor (GFRα1-4) that is attached to the plasma membrane. The GDNF family factors can support the survival of various peripheral and central neuronal populations and also have important functions outside the nervous system, especially in kidney development. Activating mutations in the RET gene cause tumours in neuroendocrine cells, whereas inactivating mutations in RET are found in patients with Hirschsprung's disease (HSCR), characterized by the loss of ganglion cells along the intestine. The aim of this study was to examine the in vivo functions of the neurturin receptor GFRα2 and the persephin receptor GFRα4 using knockout (KO) mice. Mice lacking GFRα2 grow poorly after weaning and have deficits in parasympathetic and enteric innervation. This study shows that impaired secretion by the salivary glands and exocrine pancreas contributes to growth retardation in GFRα2-KO mice. These mice have a reduced number of intrapancreatic neurons and decreased cholinergic innervation of the exocrine pancreas, as well as reduced excitatory fibres in the myenteric plexus of the small intestine. This study also demonstrates that GFRα2-mediated Ret signalling is required for target innervation and maintenance of soma size of sympathetic cholinergic neurons and sensory nociceptive IB4-binding neurons. Furthermore, lack of GFRα2 in mice results in deficient perception of temperatures above and below thermoneutrality and in an attenuated inflammatory pain response. GFRα4 is co-expressed with Ret predominantly in calcitonin-producing thyroid C-cells in the mouse. In this study, GFRα4-deficient mice were generated. The mice show no gross developmental deficits and have a normal number of C-cells. However, young but not adult mice lacking GFRα4 have a lower production of calcitonin in thyroid tissue and, consequently, an increased bone formation rate. Thus, GFRα4/Ret signalling may regulate calcitonin production. In conclusion, this study reveals that GFRα2/Ret signalling is crucial for the development and function of specific components of the peripheral nervous system and that GFRα4-mediated Ret signalling is required for controlling transmitter synthesis in thyroid C-cells.
Abstract:
Maltose and maltotriose are the two most abundant sugars in brewer's wort, and thus the ability of brewer's yeast to utilize them efficiently is of major importance in the brewing process. The increasing tendency to utilize high- and very-high-gravity worts containing increased concentrations of maltose and maltotriose renders the need for efficient transport of these sugars even more pronounced. Residual maltose, and especially maltotriose, is quite often present after high- and very-high-gravity fermentations. Sugar uptake capacity has been shown to be the rate-limiting factor for maltose and maltotriose utilization. The main aim of the present study was to find novel ways to improve maltose and maltotriose utilization during the main fermentation. The maltose and maltotriose uptake characteristics of several ale and lager strains were studied. Genotype determination of the genes needed for maltose and maltotriose utilization was performed. Maltose uptake inhibition studies were performed to reveal the dominant transporter types actually functioning in each of the strains. The temperature-dependence of maltose transport was studied for ale and lager strains as well as for each of the single sugar transporter proteins Agt1p, Malx1p and Mtt1p. The AGT1 promoter regions of one ale and two lager strains were sequenced by chromosome walking, and the promoter elements were searched for using computational methods. The results showed that ale and lager strains predominantly use different transporter types for maltose and maltotriose uptake. The Agt1 transporter was found to be the dominant maltose/maltotriose transporter in the ale strains, whereas Malx1- and Mtt1-type transporters dominated in the lager strains. All lager strains studied were found to possess a non-functional Agt1 transporter. The ale strains were observed to be more sensitive to a temperature decrease in their maltose uptake than the lager strains. Single transporters were observed to differ in their sensitivity to a temperature decrease, and their temperature-dependence was shown to decrease in the order Agt1 ≥ Malx1 > Mtt1. The different temperature-dependence of the ale and lager strains was shown to be due to the different dominant maltose/maltotriose transporters they possess. The AGT1 promoter regions of the ale and lager strains were found to differ markedly from the corresponding regions of laboratory strains. The ale strain was found to possess an extra MAL-activator binding site compared to the lager strains. Improved maltose and maltotriose uptake capacity was obtained with a modified lager strain in which the AGT1 gene was repaired and put under the control of a strong promoter. Modified strains fermented wort faster and more completely, producing beers containing more ethanol and less residual maltose and maltotriose. Significant savings in the main fermentation time were obtained when modified strains were used: 8-20% in high-gravity and as much as 11-37% in very-high-gravity wort fermentations. These are economically significant changes and would cause a marked increase in annual output from the same brewhouse and fermentor facilities.
Abstract:
In this thesis three icosahedral lipid-containing double-stranded (ds) deoxyribonucleic acid (DNA) bacteriophages have been studied: PRD1, Bam35 and P23-77. The work focuses on the entry, exit and structure of the viruses. PRD1 is the type member of the Tectiviridae family, infecting a variety of Gram-negative bacteria. The PRD1 receptor binding complex, consisting of the penton protein P31, the spike protein P5 and the receptor binding protein P2, recognizes a specific receptor on the host surface. In this study we found that the transmembrane protein P16 has an important stabilizing function as the fourth member of the receptor binding complex, and that P16 may have a role in the formation of a tubular membrane structure, which is needed for the ejection of the genome into the cell. Phage Bam35 (Tectiviridae), which infects Gram-positive hosts, has earlier been found to resemble PRD1 in morphology and genome organization. The uncharacterized early and late events in the Bam35 life cycle were studied by electrochemical methods. Physiological changes at the beginning of the infection were found to be similar in both lysogenic and nonlysogenic cell lines, Bam35 inducing a temporary decrease of membrane voltage and K+ efflux. At the end of the infection cycle, physiological changes were observed only in the nonlysogenic cell line. The strong K+ efflux 40 min after infection and the induced premature cell lysis suggest that Bam35 has a holin-endolysin lysis system similar to that of PRD1. The thermophilic icosahedral dsDNA Thermus phages P23-65H, P23-72 and P23-77 have been proposed to belong to the Tectiviridae family. In this study these phages were compared to each other. Analysis of structural protein patterns and stability revealed these phages to be very similar but not identical. The most stable of the studied viruses, P23-77, was analyzed in more detail. Cryo-electron microscopy and three-dimensional image reconstruction were used to determine the structure of the virus to 14 Å resolution. The results of thin-layer chromatography for neutral lipids, together with analysis of the three-dimensional reconstruction of the P23-77 virus particle, revealed the presence of an internal lipid membrane. The overall capsid architecture of P23-77 is similar to that of PRD1 and Bam35, but it most closely resembles the capsid structure of the archaeal virus SH1. This complicates the classification of dsDNA, internal-lipid-containing icosahedral viruses.
Abstract:
With transplant rejection rendered a minor concern and survival rates after liver transplantation (LT) steadily improving, long-term complications are attracting more attention. Current immunosuppressive therapies, together with other factors, are accompanied by considerable long-term toxicity, which clinically manifests as renal dysfunction, high risk for cardiovascular disease, and cancer. This thesis investigates the incidence, causes, and risk factors for such renal dysfunction, cardiovascular risk, and cancer after LT. The long-term effects of LT are further addressed by surveying the quality of life and employment status of LT recipients. The consecutive patients included had undergone LT at Helsinki University Hospital from 1982 onwards. Data on renal function (creatinine and estimated glomerular filtration rate, GFR) were recorded before and repeatedly after LT in 396 patients. The presence of hypertension, dyslipidemia, diabetes, impaired fasting glucose, and overweight/obesity before and 5 years after LT was determined among 77 patients transplanted for acute liver failure. The entire cohort of LT patients (540 patients), including both children and adults, was linked with the Finnish Cancer Registry, and the numbers of cancers observed were compared to site-specific expected numbers based on national cancer incidence rates stratified by age, gender, and calendar time. Health-related quality of life (HRQoL), measured by the 15D instrument, and employment status were surveyed among all adult patients alive in 2007 (401 patients); the response rate was 89%. Posttransplant cardiovascular risk factor prevalence and HRQoL were compared with those in the age- and gender-matched Finnish general population. The cumulative risk for chronic kidney disease increased from 10% at 5 years to 16% at 10 years following LT. GFR up to 10 years after LT could be predicted by the GFR at 1 year. In patients transplanted for chronic liver disease, a moderate correlation of pretransplant GFR with later GFR was also evident, whereas in acute liver failure patients, even severe pretransplant renal dysfunction often recovered after LT. By 5 years after LT, 71% of acute liver failure patients were receiving antihypertensive medications, 61% were exhibiting dyslipidemia, 10% were diabetic, 32% were overweight, and 13% were obese. Compared with the general population, only hypertension displayed a significantly elevated prevalence among patients (2.7-fold), whereas patients exhibited 30% less dyslipidemia and 71% less impaired fasting glucose. The cumulative incidence of cancer was 5% at 5 years and 13% at 10 years. Compared with the general population, patients were subject to a 2.6-fold cancer risk, with non-melanoma skin cancer (standardized incidence ratio, SIR, 38.5; the measure is defined in the note after this abstract) and non-Hodgkin lymphoma (SIR 13.9) being the predominant malignancies. Non-Hodgkin lymphoma was associated with male gender, young age, and the immediate posttransplant period, whereas old age and antibody induction therapy raised skin-cancer risk. HRQoL deviated from general-population values to a clinically unimportant degree, but significant deficits among patients were evident in some physical domains. HRQoL did not seem to decrease with longer follow-up. Although 87% of patients reported improved working capacity, data on return to working life showed marked age-dependency: among patients aged less than 40 at LT, 70 to 80% returned to work; among those aged 40 to 50, 55%; and among those above 50, 15% to 28%.
The most common cause of unemployment was early retirement before LT. Employed patients exhibited better HRQoL than unemployed ones. In conclusion, although renal impairment, hypertension, and cancer are evidently common after LT and increase with time, patients' quality of life remains comparable with that of the general population.
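A note for readers unfamiliar with the measure: the standardized incidence ratio (SIR) used above (and in later abstracts) is the ratio of the observed cancer count in the cohort to the count expected from general-population rates. This is the standard textbook definition, not a formula quoted from the thesis:

\[ \mathrm{SIR} = \frac{O}{E}, \qquad E = \sum_{i} n_i \, \lambda_i \]

where O is the number of cancers observed in the transplant cohort, n_i is the person-years the cohort contributed to stratum i (age, gender, and calendar time), and \lambda_i is the national incidence rate in that stratum. An SIR of 38.5 for non-melanoma skin cancer thus means 38.5 times as many cases as expected under general-population rates.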
Abstract:
Background and aims. Since 1999, hospitals in the Finnish Hospital Infection Program (SIRO) have reported data on surgical site infections (SSI) following major hip and knee surgery. The purpose of this study was to obtain detailed information to support prevention efforts by analyzing SIRO data on SSIs, to evaluate possible factors affecting the surveillance results, and to assess the disease burden of postoperative prosthetic joint infections in Finland. Methods. Procedures under surveillance included total hip (THA) and total knee arthroplasties (TKA) and the open reduction and internal fixation (ORIF) of femur fractures. Hospitals prospectively collected data using common definitions and a written protocol, and also performed postdischarge surveillance. In the validation study, a blinded retrospective chart review was performed and infection control nurses were interviewed. The patient charts of deep incisional and organ/space SSIs were reviewed, and data from three sources (SIRO, the Finnish Arthroplasty Register, and the Finnish Patient Insurance Centre) were linked for capture-recapture analyses (illustrated in the sketch after this abstract). Results. During 1999-2002, the overall SSI rate was 3.3% after 11,812 orthopedic procedures (median length of stay, eight days). Of all SSIs, 56% were detected after discharge. The majority of deep incisional and organ/space SSIs (65/108, 60%) were detected on readmission. The positive and negative predictive values, sensitivity, and specificity of SIRO surveillance were 94% (95% CI, 89-99%), 99% (99-100%), 75% (56-93%), and 100% (97-100%), respectively. Of the 9,831 total joint replacements performed during 2001-2004, 7.2% (THA 5.2% and TKA 9.9%) of the implants were inserted in a simultaneous bilateral operation. Patients who underwent bilateral operations were younger, healthier, and more often male than those who underwent unilateral procedures. The rates of deep SSIs and mortality did not differ between bi- and unilateral THAs or TKAs. Four deep SSIs were reported following bilateral operations (antimicrobial prophylaxis administered 48-218 minutes before incision). In the three registers, altogether 129 prosthetic joint infections were identified after 13,482 THA and TKA during 1999-2004. After correction with the positive predictive value of SIRO (91%), a log-linear model provided an estimated overall prosthetic joint infection rate of 1.6% after THA and 1.3% after TKA. The sensitivity of SIRO surveillance ranged from 36% to 57%. According to this estimation, nearly 200 prosthetic joint infections could occur in Finland each year (the average from 1999 to 2004) after THA and TKA. Conclusions. Postdischarge surveillance had a major impact on SSI rates after major hip and knee surgery. A minority of deep incisional and organ/space SSIs would be missed, however, if postdischarge surveillance by questionnaire were not performed. According to the validation study, most SSIs reported to SIRO were true infections. Some SSIs were missed, revealing some weakness in case finding. Variation in diagnostic practices may also affect SSI rates. No differences were found in deep SSI rates or mortality between bi- and unilateral THA and TKA. However, the patient populations of these two groups differed. Bilateral operations require specific attention to their antimicrobial prophylaxis as well as to data management in the surveillance database. The true disease burden of prosthetic joint infections may be heavier than the rates from national nosocomial surveillance systems usually suggest.
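A note on the capture-recapture principle behind the three-register estimate above: in the simplest two-source case, the total number of infections, including those missed by both registers, is estimated with the Lincoln-Petersen formula; the log-linear model used in the study generalizes this to three sources and can allow for dependence between registers. The formula and the numbers below are a textbook sketch, not figures from the thesis:

\[ \hat{N} = \frac{n_1 \, n_2}{m} \]

where n_1 and n_2 are the case counts found by the two registers and m is the number of cases found by both. For instance, if one register captured 60 infections, another 50, and 30 appeared in both, the estimated true total would be 60 × 50 / 30 = 100, implying that 20 infections escaped both registers.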
Abstract:
Infection is a major cause of mortality and morbidity after thoracic organ transplantation. The aim of the present study was to evaluate infectious complications after lung and heart transplantation, with a special emphasis on the usefulness of bronchoscopy and the demonstration of cytomegalovirus (CMV), human herpesvirus (HHV)-6, and HHV-7. We reviewed all the consecutive bronchoscopies performed on heart transplant recipients (HTRs) from May 1988 to December 2001 (n = 44) and lung transplant recipients (LTRs) from February 1994 to November 2002 (n = 472). To compare different assays in the detection of CMV, a total of 21 thoracic organ transplant recipients were prospectively monitored by CMV pp65-antigenemia, DNAemia (PCR), and mRNAemia (NASBA) tests. The antigenemia test was the reference assay for therapeutic intervention. In addition to CMV antigenemia, 22 LTRs were monitored for HHV-6 and HHV-7 antigenemia. The diagnostic yield of the clinically indicated bronchoscopies was 41% in the HTRs and 61% in the LTRs. The utility of bronchoscopy was highest from one to six months after transplantation. In contrast, the findings from the surveillance bronchoscopies performed on LTRs led to a change in the previous treatment in only 6% of the cases. Pneumocystis carinii and CMV were the most commonly detected pathogens. Furthermore, 15 (65%) of the P. carinii infections in the LTRs were detected during chemoprophylaxis. None of the complications of the bronchoscopies were fatal. Antigenemia, DNAemia, and mRNAemia were present in 98%, 72%, and 43% of the CMV infections, respectively. The optimal DNAemia cut-off levels (sensitivity/specificity) were 400 (75.9/92.7%), 850 (91.3/91.3%), and 1250 (100/91.5%) copies/ml for antigenemia levels of 2, 5, and 10 pp65-positive leukocytes/50,000 leukocytes, respectively; a sketch of how such cut-offs can be derived follows this abstract. The sensitivities of the NASBA were 25.9%, 43.5%, and 56.3% in detecting the same cut-off levels. CMV DNAemia was detected in 93% and mRNAemia in 61% of the CMV antigenemias requiring antiviral therapy. HHV-6, HHV-7, and CMV antigenemia was detected in 20 (91%), 11 (50%), and 12 (55%) of the 22 LTRs (at a median of 16, 31, and 165 days), respectively. HHV-6 appeared in 15 (79%), HHV-7 in seven (37%), and CMV in one (7%) of these patients during ganciclovir or valganciclovir prophylaxis. One case of pneumonitis and another of encephalitis were associated with HHV-6. In conclusion, bronchoscopy is a safe and useful diagnostic tool in LTRs and HTRs with a suspected respiratory infection, but the role of surveillance bronchoscopy in LTRs remains controversial. The PCR assay performs comparably to the antigenemia test in guiding pre-emptive therapy against CMV when threshold levels of over 5 pp65-antigen-positive leukocytes are used. In contrast, the low sensitivity of NASBA limits its usefulness. HHV-6 and HHV-7 activation is common after lung transplantation despite ganciclovir or valganciclovir prophylaxis, but clinical manifestations are infrequently linked to them.
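A note on how cut-off levels such as those above can be derived: each candidate DNAemia threshold is scored for sensitivity and specificity against the reference assay, and the threshold that optimizes some criterion is selected. The thesis does not state which criterion was used; the minimal Python sketch below assumes Youden's J (sensitivity + specificity - 1) purely for illustration, and the paired measurements in it are invented.

```python
# Minimal sketch: choosing a PCR cut-off against a reference assay.
# The criterion (Youden's J) and all data below are illustrative
# assumptions, not taken from the thesis.

def sens_spec(dna_copies, ref_positive, cutoff):
    """Sensitivity and specificity of 'DNAemia >= cutoff' vs. the reference."""
    tp = sum(d >= cutoff for d, pos in zip(dna_copies, ref_positive) if pos)
    fn = sum(d < cutoff for d, pos in zip(dna_copies, ref_positive) if pos)
    tn = sum(d < cutoff for d, pos in zip(dna_copies, ref_positive) if not pos)
    fp = sum(d >= cutoff for d, pos in zip(dna_copies, ref_positive) if not pos)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical paired samples: CMV DNAemia (copies/ml) and whether the
# reference antigenemia test was positive (e.g. >= 5 pp65-positive leukocytes).
dna = [100, 300, 500, 900, 1500, 4000, 200, 800, 2500, 6000]
ref = [False, False, False, False, True, True, False, True, True, True]

# Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1.
best = max(sorted(set(dna)), key=lambda c: sum(sens_spec(dna, ref, c)) - 1)
se, sp = sens_spec(dna, ref, best)
print(f"cut-off {best} copies/ml: sensitivity {se:.0%}, specificity {sp:.0%}")
```

On this invented data the sketch selects 800 copies/ml (sensitivity 100%, specificity 80%); with real paired measurements the same scan would reproduce cut-off tables of the kind reported above.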
Abstract:
The soy-derived phytoestrogen genistein and 17β-estradiol (E2), the principal endogenous estrogen in women, are also potent antioxidants protecting LDL and HDL lipoproteins against oxidation. This protection is enhanced by esterification with fatty acids, resulting in lipophilic molecules that accumulate in lipoproteins or fatty tissues. The aims were to investigate whether genistein becomes esterified with fatty acids in human plasma and accumulates in lipoproteins, and to develop a method for the quantitation of these esters; to study the antioxidant activity of different natural and synthetic estrogens in LDL and HDL; and to determine the E2 esters in visceral and subcutaneous fat in late pregnancy and in pre- and postmenopause. Human plasma was incubated with [3H]genistein and its esters were analyzed from lipoprotein fractions. Time-resolved fluoroimmunoassay (TR-FIA) was used to quantitate genistein esters in monkey plasma after subcutaneous and oral administration. The E2 esters in women's serum and adipose tissue were also quantitated using TR-FIA. The antioxidant activity of estrogen derivatives (n=43) on LDL and HDL was assessed by monitoring the copper-induced formation of conjugated dienes. Human plasma was shown to produce lipoprotein-bound genistein fatty acid esters, providing a possible explanation for the previously reported increased oxidation resistance of LDL particles during intake of soybean phytoestrogens. Genistein esters were introduced into the blood by subcutaneous administration. The antioxidant effect of estrogens on lipoproteins is highly structure-dependent. LDL and HDL were protected against oxidation by many unesterified, yet lipophilic, derivatives. The strongest antioxidants had an unsubstituted A-ring phenolic hydroxyl group with one or two adjacent methoxy groups. E2 ester levels were high during late pregnancy. The median concentration of E2 esters in pregnancy serum was 0.42 nmol/l (n=13), and in pre- (n=8) and postmenopause (n=6) 0.07 and 0.06 nmol/l, respectively. In pregnancy visceral fat the concentration of E2 esters was 4.24 nmol/l, and in pre- and postmenopause 0.82 and 0.74 nmol/l. The results from subcutaneous fat were similar. In serum and fat during pregnancy, E2 esters constituted about 0.5% and 10% of the free E2, respectively. In non-pregnant women most of the E2 in fat was esterified (ester/free ratio 150-490%). In postmenopause, E2 levels in fat highly exceeded those in serum, the majority being esterified. The pathways for fatty acid esterification of steroid hormones are found in organisms ranging from invertebrates to vertebrates. The evolutionary preservation and relative abundance of E2 esters, especially in fat tissue, suggest a biological function, most likely in providing a readily available source of E2. The body's own estrogen reservoir could be used as a source of E2 by pharmacologically regulating E2 esterification or hydrolysis.
Abstract:
Some perioperative clinical factors related to primary cemented arthroplasty for osteoarthritis of the hip or knee joint are studied and discussed in this thesis. In a randomized, double-blind study, 39 patients were divided into two groups: one receiving tranexamic acid and the other not. Tranexamic acid was given at a dose of 10 mg/kg before the operation and twice thereafter, at 8-hour intervals. Total blood loss was smaller in the tranexamic acid group than in the control group. No thromboembolic complications were noticed. In a prospective, randomized study, 58 patients with hip arthroplasty and 39 patients with knee arthroplasty were divided into groups with and without postoperative closed-suction drainage. There was no difference in wound healing, postoperative blood transfusions, complications, or range of motion. As a result of this study, the use of drains is no longer recommended. In a randomized study, the effectiveness of a femoral nerve block (25 patients) was compared with other methods of pain control (24 patients) on the first postoperative day after total knee arthroplasty. The femoral block consisted of a single injection administered at the patient's bedside during the surgeon's hospital rounds. Femoral block patients reported less pain and required half the amount of oxycodone. Additional femoral block or continued epidural analgesia was required more frequently by the control group patients. Pain management with femoral blocks resulted in less work for the nursing staff. In a retrospective study of 422 total hip and knee arthroplasty cases, C-reactive protein levels and the clinical course were examined. After hip and knee arthroplasty, the maximal C-reactive protein values are seen on the second and third postoperative days, after which the level decreases rapidly. There is no difference between patients with cemented and uncemented prostheses. Major postoperative complications may cause a further increase in C-reactive protein levels at one and two weeks. In-hospital and outpatient postoperative control radiographs of 200 hip and knee arthroplasties were reviewed retrospectively. If the postoperative radiographs are of good quality, there seems to be no need for early repeat radiographs. The quality and safety of follow-up are not compromised by limiting follow-up radiographs to those with clinical indications, and the exposure of patients and staff to radiation is reduced. It is sufficient for the radiographs to be read by the treating orthopaedic surgeon alone. These factors may seem separate from each other, but linking them together may help the treating orthopaedic surgeon form an adequate patient-care strategy. Notable savings can be achieved.
Abstract:
Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements for optimizing and fine-tuning the post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against the adverse effects of the medication. In the present study, the results of a retrospective investigation into the individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid biological activity determinations, are reported. Subgroups of a total of 178 patients, who received renal transplants in 1988-2006, were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejections and preserve graft function on the one hand, and to avoid the potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics; therapeutic monitoring of CsA is therefore mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and the blood CsA concentration 0-4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient according to the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were collected for three weeks after TX and compared to the pharmacokinetically predicted dose. After the TX, the dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation with the actually administered doses after TX was best in those patients whose predicted dose was clearly higher or lower (> ±25%) than the average in their age group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate.
Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients, who received renal transplants in 2001-2006. C0 and C2 concentrations and experienced acute rejections were recorded during the post-TX hospitalization, and also three months after TX, when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX, the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand and to avoid over-exposure on the other. Controlling rejection in the early months after transplantation is crucial, as it may contribute to the development of long-term allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of the treatment of subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared to 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. The glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73 m2, respectively). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73 m2). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as part of the baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of the patients, the well-known steroid-related adverse effects are not uncommon. It has been shown in a previous study of Finnish pediatric TX patients that steroid exposure, measured as the area under the concentration-time curve (AUC; see the sketch following this abstract), rather than the dose, correlates with the adverse effects. In the present study, the MP AUC was measured in sixteen stable maintenance patients, and a correlation with excess weight gain during the 12 months after TX, as well as with height deficit, was found. A novel bioassay measuring the activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with the serum MP concentration. The findings in this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function, and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus in identifying patients who might be at risk for excessive or insufficient immunosuppression.
Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, but possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial, as it improves the prospects of good long-term graft function.
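A note on the exposure measure discussed above: an AUC is typically computed from a sampled concentration-time profile with the trapezoidal rule, and the C0 (pre-dose trough) and C2 (2-hour post-dose) surrogate markers are read directly off the same profile. The Python sketch below shows the computation on an invented cyclosporine-like profile; the sampling times and concentrations are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch: drug exposure (AUC) by the trapezoidal rule, plus the
# C0 and C2 surrogate markers. All numbers are hypothetical.

def trapezoidal_auc(times_h, concs):
    """Area under the concentration-time curve using linear trapezoids."""
    pairs = zip(zip(times_h, concs), zip(times_h[1:], concs[1:]))
    return sum(0.5 * (c0 + c1) * (t1 - t0) for (t0, c0), (t1, c1) in pairs)

# Hypothetical 0-12 h blood concentration profile (ug/l) after an oral dose.
times = [0, 1, 2, 3, 4, 6, 8, 12]                # hours after dosing
concs = [120, 680, 950, 820, 610, 380, 240, 130]

auc_0_12 = trapezoidal_auc(times, concs)         # ug*h/l over the interval
c0, c2 = concs[0], concs[times.index(2)]         # trough and 2 h markers
print(f"AUC(0-12 h) = {auc_0_12:.0f} ug*h/l, C0 = {c0} ug/l, C2 = {c2} ug/l")
```

The appeal of C2 monitoring is visible in the sketch: the full AUC needs many blood draws, whereas a single 2-hour sample near the absorption peak serves as a practical stand-in for total exposure.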
Abstract:
Ischemic stroke (IS) is a heterogeneous disease in which outcome is influenced by many factors. The hemostatic system is activated in association with cerebral ischemia, and thus markers measuring coagulation, fibrinolysis, and vasoactivity could be useful tools in clinical practice. We investigated whether repeated measurements of these markers reveal patterns that might help in evaluating IS patients, including the early diagnosis of stroke subtypes, in estimating prognosis and risk of recurrence, and in selecting a treatment for secondary prevention of stroke. The vasoconstrictor peptide endothelin-1 (ET-1), homocysteine (Hcy), indicators of thrombin formation and activation (prothrombin fragment 1+2/F1+2, thrombin-antithrombin complex/TAT), indicators of plasmin formation and fibrinolysis (tissue plasminogen activator/t-PA, plasminogen activator inhibitor-1/PAI-1, and D-dimer), and natural anticoagulants (antithrombin/AT, protein C/PC, and protein S/PS) were measured in 102 consecutive mild to moderate IS patients on four occasions (on admission and at 1 week, 1 month, and 3 months after stroke) and once in controls. All patients underwent neurological examination and blood sampling in the same session. Furthermore, 42 IS patients with the heterozygous factor V Leiden mutation (FVLm) were selected from 740 IS patients without an obvious etiology and evaluated in detail for specific clinical, laboratory, and radiological features. Measurements of ET-1 and Hcy levels did not disclose information that could aid in the diagnostic evaluation of IS patients. The F1+2 level at 3 months after IS had a positive correlation with the recurrence of thromboembolic events, and thus may be used as a predictive marker of subsequent cerebral events. The D-dimer and AT levels on admission and 1 week after IS were strongly associated with stroke severity, outcome, and disability. IS patients with FVLm more often had a positive family history of thrombosis, a higher prevalence of peripheral vascular disease, and multiple infarctions in brain images, most of which were 'silent infarcts'. The results of this study support the view that IS patients with sustained activation of both the fibrinolytic and coagulation systems and increased thrombin generation may have an unfavorable prognosis. The level of activation may reflect the ongoing thrombotic process and the extent of thrombosis. Changes in these markers could be useful in predicting the prognosis of IS patients. A clear need exists for a randomized prospective study to determine whether a subgroup of IS patients with markers indicating activation of the fibrinolytic and coagulation systems might benefit from more aggressive secondary prevention of IS.
Abstract:
Staphylococcus aureus is the second most common bloodstream isolate in both community- and hospital-acquired bacteremias. The clinical course of S. aureus bacteremia (SAB) is determined by its complications, particularly by the development of deep infections and thromboembolic events. Despite the progress of antimicrobial therapy, SAB is still associated with high mortality. However, injection drug users (IDUs) tend to have fewer complications and a better prognosis than nonaddicts, especially in endocarditis. The present study was undertaken to investigate the epidemiology, treatment, and outcome of S. aureus bacteremia and endocarditis in Finland. In particular, differences in bacterial strains and their virulence factors, and in host immune responses, were compared between IDUs and nonaddicts. In Finland, 5045 SAB cases during 1995-2001 were included using the National Infectious Disease Register maintained by the National Public Health Institute. The annual incidence of SAB increased, especially in the elderly. While the increase in incidence may partly be explained by better reporting, it most likely reflects a growing population at risk, affected by such factors as age and/or severe comorbidity. Nosocomial infections accounted for 51% of cases, with no change in their proportion during the study period. The 28-day mortality was 17% and remained unchanged over time. A total of 381 patients with SAB were randomized to receive either standard antibiotic treatment or levofloxacin added to standard treatment. Levofloxacin combination therapy did not decrease mortality or the incidence of deep infections, nor did it speed up recovery during the 3-month follow-up. However, patients with a deep infection appeared to benefit from combination therapy with rifampicin, as suggested also by experimental data. Deep infections were found in 84% of SAB patients within one week after randomization, and they appeared to be more common than previously reported. Endocarditis was observed in 74 of 430 patients (17%) with SAB, of whom 20 were IDUs and 54 nonaddicts. Right-sided involvement was diagnosed in 60% of the addicts, whereas 93% of the nonaddicts had left-sided endocarditis. Unexpectedly, IDUs showed extracardiac deep infections, thromboembolic events, and severe sepsis with the same frequency as nonaddicts. The prognosis of endocarditis was better among addicts due to their younger age and lack of underlying diseases, in agreement with earlier reports. In total, all 44 IDUs with SAB were included, 20 of whom had endocarditis. An equal number of nonaddicts with SAB were chosen as group-matched controls. Serological tests were not helpful in identifying patients with a deep infection. No individual S. aureus strain dominated in endocarditis among the addicts. Characterization of the virulence factors of the bacterial strains did not reveal any significant differences between IDUs and nonaddicts.
Abstract:
Heart failure is a common and highly challenging medical disorder. The progressive growth of the elderly population is expected to be further reflected in heart failure incidence. Recent progress in cell transplantation therapy has provided a conceptual alternative for the treatment of heart failure. Despite improved medical treatment and operative possibilities, end-stage coronary artery disease presents a great medical challenge. It has been estimated that therapeutic angiogenesis will be the next major advance in the treatment of ischaemic heart disease. Gene transfer to augment neovascularization could be beneficial for such patients. We employed a porcine model to evaluate the angiogenic effect of vascular endothelial growth factor (VEGF)-C gene transfer. Ameroid-generated myocardial ischemia was produced, and adenoviral (ad) VEGF-C or β-galactosidase (LacZ) gene therapy was given intramyocardially during progressive coronary stenosis. Angiography, positron emission tomography (PET), single photon emission computed tomography (SPECT) and histology evidenced beneficial effects of adVEGF-C gene transfer compared to adLacZ: the myocardial deterioration during progressive coronary stenosis seen in the control group was restrained in the treatment group. We observed an uneven occlusion rate of the coronary vessels with the Ameroid constrictor and therefore developed a simple methodological improvement of the Ameroid model by ligating the Ameroid-stenosed coronary vessel. The improved model showed a more reliable occlusion rate of the vessel concerned and the formation of a fairly constant myocardial infarction. We assessed the spontaneous healing of the left ventricle (LV) in this new model by SPECT, PET, MRI, and angiography. Significant spontaneous improvement of myocardial perfusion and function was seen, as well as a diminishment of scar volume. Histologically, more microvessels were seen in the border area of the lesion. Double staining of myocytes in mitosis indicated more cardiomyocyte regeneration in the remote area of the lesion. The potential of autologous myoblast transplantation after ischaemia and infarction of the porcine heart was evaluated. After ligation of the stenosed coronary artery, autologous myoblast transplantation or control medium was injected directly into the myocardium at the lesion area. As assessed by MRI, improvement of diastolic function was seen in the myoblast-transplanted animals but not in the control animals. Systolic function remained unchanged in both groups.
Abstract:
The aim of the study was to clarify the occurrence and the etiological and prognostic factors of primary fallopian tube carcinoma (PFTC). We studied the sociodemographic determinants of the incidence of PFTC in Finland and the role of chlamydial and human papillomavirus infections as risk factors for PFTC. Serum tumor markers were studied as prognostic factors for PFTC. We also evaluated selected reproductive factors (parity, sterilization and hysterectomy) as risk or protective factors for PFTC. The risks of second primary cancers after PFTC were also studied. The age-adjusted incidence of PFTC in Finland increased to 5.4/1,000,000 in 1993-97. The incidence rate was higher in the cities, but the relative rise was higher in rural areas. Women in the two highest social classes showed a 1.8-fold incidence compared with those in the lowest. Women in agriculture and those not working outside the home showed only half the PFTC incidence of those in higher socioeconomic occupations. Pretreatment serum concentrations of hCGβ, CA125 and TATI were evaluated as prognostic markers for PFTC. Elevated hCGβ values (above the 75th percentile, 3.5 pmol/L; OR 2.49, 95% CI 1.22-5.09), stage, and histology were strong independent prognostic factors for PFTC. The effects of parity, sterilization and hysterectomy on the risk of PFTC were studied in a case-control study with 573 PFTC cases from the Finnish Cancer Registry. In multivariate analysis, parity was the only significant protective factor as regards PFTC, with increasing protection associated with an increasing number of deliveries. In univariate analysis, sterilization gave borderline protection against PFTC, and the protective effect increased with time since the operation; in multivariate analysis the protection did not reach statistical significance. Chlamydial and human papillomavirus (HPV) infections were studied in two separate seroepidemiological case-control studies with 78 PFTC patients. The proportion of women with positive HPV or chlamydial serology was the same in PFTC patients as in the control group, and neither infection was found to be a risk factor for PFTC. Finally, the possible risk of a second primary cancer after the diagnosis and treatment of PFTC was studied in a cohort of 2084 cases from 13 cancer registries followed for second primary cancers within the period 1943-2000. In PFTC patients, second primary cancers were 36% more common than expected (SIR 1.36, 95% CI 1.13-1.63). In conclusion, the incidence of PFTC has increased in Finland, especially in the higher social classes and among those in certain occupations. Elevated serum hCGβ reflects a worsened prognosis. Parity is a clear protective factor, as is previous sterilization. After PFTC there is a risk of second primary cancers, especially colorectal, breast, lung and bladder cancers and non-lymphoid leukemia. The excess of colorectal and breast cancers after PFTC may indicate common effects of earlier treatments, or it could reflect common effects of lifestyle or of genetic, immunological or environmental background.
Abstract:
This is an ethnographic study of the lived worlds of the keepers of small shops in a residential neighborhood in Seoul, South Korea. It outlines, discusses, and analyses the categories and conceptualizations of South Korean capitalism at the level of households, neighborhoods, and Korean society. These cultural categories were investigated through the neighborhood shopkeepers' practices of work and reciprocal interaction, as well as through the shopkeepers' articulations of their lived experience. In South Korea, the keepers of small businesses have continued to be a large occupational category despite societal and economic changes, comprising approximately one fourth of the active workforce. In spite of that, these people, their livelihoods, and their cultural and social worlds have rarely been the focus of social science inquiry. The ethnographic field research for this study was conducted during a 14-month period between November 1998 and December 1999 and in three subsequent short visits to Korea and to the research neighborhood. The fieldwork was conducted in the aftermath of the Asian currency crisis, colloquially termed the IMF crisis at the time, which highlighted the social and cultural circumstances of small businesskeepers in a specific way. The livelihoods of small-scale entrepreneurs became even more precarious than before; self-employment became an involuntary choice for many middle-class salaried employees who were laid off; and the cultural categories and concepts of society and economy, of South Korean capitalism, were articulated more sharply than before. This study begins with an overview of the contemporary setting, Korean society under the socially and economically painful outcomes of the economic crisis, and continues with an overview of the relevant literature. After introducing the research area and the informants, I discuss the Korean notion of neighborhood, which incorporates both notions of culturally valued Koreanness and a sense of deficiency in terms of modernity and development. This study further analyses the ways in which the businesskeepers appropriate and reproduce the Korean ideas of men's and women's gender roles and spheres of work. As the appropriation of children's labor is conditional on intergenerational family trajectories, which aim not to reproduce the parents' occupational status but to gain entry to salaried occupations via educational credentials, the work of a married couple is the most common organization of work in small businesses, to which the Korean ideas of family and kin continuity are not applied. While the lack of generational businesskeeping succession suggests that the proprietors mainly subscribe to notions of familial status that emanate from the practices of the white-collar middle class, the cases of certain women shopkeepers show that their proprietorship, and the ensuing economic standing in the family, prompts and invites inverted interpretations and uses of common cultural notions of gender. After discussing and analyzing the concept of money and the cultural categorization of leisure and work, topics that emerged as very significant in the lived world of the shopkeepers, this study charts and analyses the categories of identification which the shopkeepers employ for their cultural and social locations and identities.
Particular attention is paid to the idea of 'ordinary people' (seomin), of which shopkeepers are commonly considered to be most representative, and which also sums up the ambivalence of neighborhood shopkeepers as a social category: they are not committed to familial reproduction and the continuity of the business but aspire to non-entrepreneurial careers for their children, while they occupy a significant position in elaborations of the culturally valued notions and ideologies defining Koreanness, such as warmheartedness and sociability.