Abstract:
Increased anthropogenic loading of nitrogen (N) and phosphorus (P) has led to a eutrophication problem in the Baltic Sea, and the spring bloom is a key component in the biological uptake of the increased nutrient concentrations. The spring bloom in the Baltic Sea is dominated by both diatoms and dinoflagellates. However, the sedimentation of these groups differs: diatoms tend to sink to the sea floor at the end of the bloom, while dinoflagellates are to a large degree remineralized in the euphotic zone. Understanding phytoplankton competition and species-specific ecological strategies is thus important for assessing indirect effects of phytoplankton community composition on eutrophication problems. The main objective of this thesis was to describe some basic physiological and ecological characteristics of the main cold-water diatoms and dinoflagellates in the Baltic Sea. This was achieved by specific studies of: (1) seasonal vertical positioning, (2) dinoflagellate life cycle, (3) mixotrophy, (4) primary production, respiration and growth, and (5) diatom silicate uptake, using cultures of common cold-water diatoms: Chaetoceros wighamii, C. gracilis, Pauliella taeniata, Thalassiosira baltica, T. levanderi, Melosira arctica, Diatoma tenuis, Nitzschia frigida, and dinoflagellates: Peridiniella catenata, Woloszynskia halophila and Scrippsiella hangoei. The diatoms had higher primary production capacity and lower respiration rates than the dinoflagellates. This difference was reflected in the maximum growth rate, which for the examined diatoms ranged from 0.6 to 1.2 divisions d-1, compared with 0.2 to 0.3 divisions d-1 for the dinoflagellates. Among the diatoms there were species-specific differences in light utilization and silicate uptake, and C. wighamii had the highest carbon assimilation capacity and maximum silicate uptake. The physiological properties of diatoms and dinoflagellates were used in a model of the onset of the spring bloom: for the diatoms the model could predict the initiation of the spring bloom; S. hangoei, on the other hand, could not compete successfully and did not attain positive growth in the model. The other dinoflagellates did not have higher growth rates or carbon assimilation rates and would thus probably not perform better than S. hangoei in the model. The dinoflagellates do, however, have competitive advantages that were not included in the model: motility and mixotrophy. Previous investigations have revealed that the chain-forming P. catenata performs diurnal vertical migration (DVM), and the results presented here suggest that active positioning in the water column, in addition to DVM, is a key element in this species' life strategy. There was an indication of mixotrophy in S. hangoei, as it produced and excreted the enzyme leucine aminopeptidase (LAP). Moreover, there was indirect evidence that W. halophila obtains carbon from sources other than photosynthesis when the increase in cell numbers was compared with in situ carbon assimilation rates. The results indicate that mixotrophy is part of the strategy of vernal dinoflagellates in the Baltic Sea. There were also indications that seeding of the spring bloom is very important for the dinoflagellates to succeed. In mesocosm experiments dinoflagellates could not compete with diatoms when their initial numbers were low. In conclusion, this thesis has provided new information about the basic physiological and ecological properties of the main cold-water phytoplankton in the Baltic Sea.
The main phytoplankton groups, diatoms and dinoflagellates, have different physiological properties, which clearly separate their life strategies. The information presented here could serve as further steps towards better prognostic models of the effects of eutrophication in the Baltic Sea.
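The bloom-onset argument above rests on the balance between light-limited primary production and respiration. The sketch below only illustrates that balance and is not the thesis's actual model: the production-irradiance formulation and every parameter value are assumptions chosen to mirror the qualitative diatom-dinoflagellate contrast described in the abstract.

```python
# Minimal sketch (not the thesis model): net growth as light-limited primary
# production minus respiration, with illustrative parameter values only.

import math

def net_growth_rate(p_max, alpha, respiration, irradiance):
    """Net specific growth rate (d^-1) from a saturating P-I curve minus respiration."""
    production = p_max * math.tanh(alpha * irradiance / p_max)
    return production - respiration

# Hypothetical parameters: diatoms with higher production capacity and lower
# respiration than dinoflagellates, as the abstract reports qualitatively.
groups = {
    "diatom (e.g. C. wighamii)":   dict(p_max=0.9, alpha=0.05, respiration=0.10),
    "dinoflagellate (S. hangoei)": dict(p_max=0.25, alpha=0.02, respiration=0.15),
}

for name, pars in groups.items():
    mu = net_growth_rate(irradiance=5.0, **pars)   # low late-winter irradiance
    print(f"{name}: net growth {mu:+.2f} d^-1 -> {'bloom possible' if mu > 0 else 'no net growth'}")
```

Under these assumed low-light conditions the diatom achieves positive net growth while the dinoflagellate does not, which is the qualitative outcome the abstract attributes to its bloom-onset model.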
Abstract:
Human parvovirus B19 is a minute ssDNA virus causing a wide variety of diseases, including erythema infectiosum, arthropathy, anemias, and fetal death. After primary infection, genomic DNA of B19 has been shown to persist in solid tissues not only of symptomatic but also of constitutionally healthy, immunocompetent individuals. In this thesis, the viral DNA was shown to persist as an apparently intact molecule of full length, without persistence-specific mutations. Thus, although the mere presence of B19 DNA in tissue cannot be used as a diagnostic criterion, a possible role in the pathogenesis of diseases, e.g. through mRNA or protein production, cannot be excluded. The molecular mechanism, the host-cell type and the possible clinical significance of B19 DNA tissue persistence are yet to be elucidated. At the beginning of this work, the B19 genomic sequence was considered highly conserved. However, new variants were found: V9 was detected in 1998 in France, in the serum of a child with aplastic crisis. This variant differed from the prototypic B19 sequences by ~10%. In 2002 we found, persisting in the skin of constitutionally healthy humans, DNA of another novel B19 variant, LaLi. Genetically this variant differed from both the prototypic sequences and the variant V9, also by ~10%. Simultaneously, B19 isolates with DNA sequences similar to LaLi were introduced by two other groups, in the USA and France. Based on phylogeny, a classification scheme of three genotypes (B19 types 1-3) was proposed. Although the B19 virus is mainly transmitted via the respiratory route, blood and plasma-derived products contaminated with high levels of B19 DNA have also been shown to be infectious. The European Pharmacopoeia stipulates that, in Europe, from the beginning of 2004, plasma pools for manufacture must contain less than 10^4 IU/ml of B19 DNA. Quantitative PCR screening is therefore a prerequisite for restricting the B19 DNA load and obtaining safe plasma products. Due to the DNA sequence variation among the three B19 genotypes, however, B19 PCR methods might fail to detect the new variants. We therefore examined the suitability of the two commercially available quantitative B19 PCR tests, the LightCycler-Parvovirus B19 quantification kit (Roche Diagnostics) and RealArt Parvo B19 LC PCR (Artus), for detection, quantification and differentiation of the three known B19 types, including B19 types 2 and 3. The former method was highly sensitive for detection of the B19 prototype but was not suitable for detection of types 2 and 3. The latter method detected and differentiated all three B19 virus types. However, one of the two type-3 strains was detected at a lower sensitivity. We then assessed the prevalence of the three B19 virus types among Finnish blood donors by screening pooled plasma samples derived from >140 000 blood-donor units: none of the pools contained detectable levels of B19 virus types 2 or 3. According to the results of other groups, B19 type 2 was also absent among Danish blood donors, and extremely rare among symptomatic European patients. B19 type 3 has been encountered endemically in Ghana and (apparently) in Brazil, and sporadic cases have been detected in France and the UK. We next examined the biological characteristics of these virus types. The p6 promoter regions of virus types 1-3 were cloned in front of a reporter gene, the constructs were transfected into different cell lines, and the promoter activities were measured.
As a result, we found that the activities of the three p6 promoters, although differing in sequence by >20%, were of equal strength and most active in B19-permissive cells. Furthermore, the infectivity of the three B19 types was examined in two B19-permissive cell lines. RT-PCR revealed synthesis of spliced B19 mRNAs, and immunofluorescence verified the production of NS1 and VP proteins in the infected cells. These experiments suggested similar host-cell tropism and showed that the three virus types are strains of the same species, i.e. human parvovirus B19. Last but not least, the sera from subjects infected in the past with either B19 type 1 or type 2 (as evidenced by tissue persistence of the respective DNAs) revealed in VP1/2- and VP2-EIAs 100% cross-reactivity between virus types 1 and 2. These results, together with similar studies by others, indicate that the three B19 genotypes constitute a single serotype.
Abstract:
Background: Patients may need massive volume-replacement therapy after cardiac surgery because of large perioperative fluid shifts and the use of cardiopulmonary bypass. Hemodynamic stability is better maintained with colloids than with crystalloids, but colloids cause more adverse effects, such as coagulation disturbances and impairment of renal function, than do crystalloids. The present study examined the effects of modern hydroxyethyl starch (HES) and gelatin solutions on blood coagulation and hemodynamics. The mechanism by which colloids disturb blood coagulation was investigated by thromboelastometry (TEM) after cardiac surgery and in vitro using experimental hemodilution. Materials and methods: Ninety patients scheduled for elective primary cardiac surgery (Studies I, II, IV, V) and twelve healthy volunteers (Study III) were included in this study. After admission to the cardiac surgical intensive care unit (ICU), patients were randomized to receive different doses of HES 130/0.4, HES 200/0.5, or 4% albumin solutions. Ringer's acetate or albumin solutions served as controls. Coagulation was assessed by TEM, and hemodynamic measurements were based on cardiac index (CI) measured by thermodilution. Results: HES and gelatin solutions impaired whole blood coagulation similarly, as measured by TEM, even at a small dose of 7 mL/kg. These solutions reduced clot strength and prolonged clot formation time. These effects were more pronounced with increasing doses of colloids. Neither albumin nor Ringer's acetate solution disturbed blood coagulation significantly. Coagulation disturbances after infusion of HES or gelatin solutions were clinically slight, and postoperative blood loss was comparable with that after Ringer's acetate or albumin solutions. Both single and multiple doses of all the colloids increased CI postoperatively, and this effect was dose-dependent. Ringer's acetate had no effect on CI. At a small dose (7 mL/kg), the effect of gelatin on CI was comparable with that of Ringer's acetate and significantly less than that of HES 130/0.4 (Study V). However, when the dose was increased to 14 and 21 mL/kg, the hemodynamic effect of gelatin rose and became comparable with that of HES 130/0.4. Conclusions: After cardiac surgery, HES and gelatin solutions impaired clot strength in a dose-dependent manner. The potential mechanisms were interaction with fibrinogen and fibrin formation, resulting in decreased clot strength, and hemodilution. Although the use of HES and gelatin inhibited coagulation, postoperative bleeding by the first postoperative morning was similar in all the study groups. A single dose of HES solutions improved CI postoperatively more than did gelatin, albumin, or Ringer's acetate. However, when administered in a repeated fashion (cumulative dose of 14 mL/kg or more), no differences were evident between HES 130/0.4 and gelatin.
Abstract:
The purpose of the present study was to examine the outcome of pregnancies among HIV-infected women in Helsinki, the use of the levonorgestrel-releasing intrauterine system (LNG-IUS) among HIV-infected women, and the prevalence and risk factors of cytologically and histologically proven cervical lesions in this population. Between 1993 and 2003 a total of 45 HIV-infected women delivered 52 singleton infants. HIV infection was diagnosed during pregnancy in 40% of the mothers. Seventeen of the mothers received antiretroviral (ARV) medication prior to pregnancy, and in 34 cases the medication was started during pregnancy. A good virological response (i.e. HIV RNA load <1000/mL during the last trimester) to ARV medication was achieved in 36/40 (90%) of the patients in whom HI viral load measurements were performed. Of the infants, 92% were born at term, and their mean (±SD) birth weight was 3350±395 g. The Caesarean section rate was low, 25%. All newborns received ARV medication, and none of the infants born to mothers with a pre-delivery diagnosis of maternal HIV infection were infected. The safety and advantages of the LNG-IUS were studied prospectively (n=12) and retrospectively (n=6). The LNG-IUS was well tolerated, and no cases of pelvic inflammatory disease (PID) or pregnancy were noted. Menstrual bleeding was reduced significantly during use of the LNG-IUS; this was associated with a slight increase in haemoglobin levels. Serum oestradiol concentrations remained in the follicular range in all subjects. The key finding was that genital shedding of HIV RNA did not change after insertion of the LNG-IUS. The mean annual prevalence of low-grade squamous intraepithelial lesions (SIL) was 15% and that of high-grade SIL was 5% among 108 systematically followed HIV-infected women during 1989–2003. A reduced CD4 lymphocyte count was associated with an increased prevalence of SIL, whereas the duration of HIV infection, use of ARV medication and HI viral load were not. The cumulative risk of any type of SIL was 17% after one year and 48% after five years among patients with initially normal Pap smears. The risk of developing SIL was associated with young age and a high initial HI viral load. During the follow-up, 51 of the 153 subjects displayed cervical intraepithelial neoplasia (CIN) (16% CIN 1 and 18% CIN 2-3). Only one case of cancer of the uterine cervix was detected. Pap smears were reliable in screening for CIN. Both nulliparity (p<0.01) and bacterial vaginosis (p<0.04) emerged as significant risk factors for CIN. In conclusion, a combination of universal antenatal screening and multidisciplinary management allows individualized treatment and prevents vertical transmission of HIV. Use of the LNG-IUS is safe among HIV-infected women, and cervicovaginal shedding of HIV RNA is not affected by its use. The risk of cervical pre-malignant lesions is high among HIV-infected women despite systematic follow-up.
Abstract:
Spirometry is the most widely used lung function test in the world. It is fundamental in the diagnostic and functional evaluation of various pulmonary diseases. In the studies described in this thesis, the spirometric assessment of reversibility of bronchial obstruction, its determinants, and its variation are described in a general population sample from Helsinki, Finland. This study is part of the FinEsS study, a collaborative study of the clinical epidemiology of respiratory health between Finland (Fin), Estonia (Es), and Sweden (S). Asthma and chronic obstructive pulmonary disease (COPD) constitute the two major obstructive airway diseases. The prevalence of asthma has increased, with around 6% of the population in Helsinki reporting physician-diagnosed asthma. The main cause of COPD is smoking, and changes in the population's smoking habits affect its prevalence with a delay. Whereas airway obstruction in asthma is by definition reversible, COPD is characterized by fixed obstruction. Cough and sputum production, the first symptoms of COPD, are often misinterpreted as smoker's cough and not recognized as the first signs of a chronic illness. COPD is therefore widely underdiagnosed. More extensive use of spirometry in primary care is advocated to focus smoking cessation interventions on populations at risk. The use of forced expiratory volume in six seconds (FEV6) instead of forced vital capacity (FVC) has been suggested to enable office spirometry to be used in earlier detection of airflow limitation. Despite being a widely accepted standard method of assessing lung function, the methodology and interpretation of spirometry are constantly developing. In 2005, the ATS/ERS Task Force issued a joint statement which endorsed the 12% and 200 ml thresholds for a significant change in forced expiratory volume in one second (FEV1) or FVC during bronchodilation testing, but included the notion that in cases where only FVC improves it should be verified that this is not caused by a longer exhalation time in post-bronchodilator spirometry. This elicited new interest in the assessment of forced expiratory time (FET), a spirometric variable not usually reported or used in assessment. In this population sample, we examined FET and found it to be on average 10.7 (SD 4.3) s and to increase with ageing and with airflow limitation in spirometry. The intrasession repeatability of FET was the poorest of the spirometric variables assessed. Based on the intrasession repeatability, a limit of 3 s was suggested for a significant change in FET during bronchodilation testing. FEV6 was found to perform as well as FVC in the population and in a subgroup of subjects with airways obstruction. In the bronchodilation test, decreases were frequently observed in FEV1 and particularly in FVC. The limit of significant increase based on the 95th percentile of the population sample was 9% for FEV1 and 6% for FEV6 and FVC; these are slightly lower than the current limits for single bronchodilation tests (ATS/ERS guidelines). FEV6 proved to be a valid alternative to FVC also in the bronchodilation test and would remove the need to control the duration of exhalation during the spirometric bronchodilation test.
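The ATS/ERS bronchodilator-response rule cited above (an increase of at least 12% and 200 ml in FEV1 or FVC) can be stated as a short worked example. The sketch below is illustrative only: the function name and the test values are hypothetical, and it is not the analysis code of the thesis.

```python
# Minimal sketch of the ATS/ERS (2005) significant-bronchodilator-response rule
# mentioned in the abstract: an increase of >= 12% AND >= 200 ml in FEV1 or FVC.

def significant_response(pre_l: float, post_l: float,
                         pct_limit: float = 12.0, abs_limit_ml: float = 200.0) -> bool:
    """Return True if the post-bronchodilator change meets both the relative and absolute limits."""
    change_ml = (post_l - pre_l) * 1000.0
    change_pct = 100.0 * (post_l - pre_l) / pre_l
    return change_pct >= pct_limit and change_ml >= abs_limit_ml

# Hypothetical test: FEV1 rises from 2.10 L to 2.40 L (+14.3%, +300 ml)
print(significant_response(2.10, 2.40))   # True
# The population-based limits suggested in the thesis (9% for FEV1, 6% for FEV6/FVC)
# could be explored by changing pct_limit and relaxing the absolute criterion.
```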
Abstract:
Infection is a major cause of mortality and morbidity after thoracic organ transplantation. The aim of the present study was to evaluate the infectious complications after lung and heart transplantation, with special emphasis on the usefulness of bronchoscopy and the demonstration of cytomegalovirus (CMV), human herpesvirus (HHV)-6, and HHV-7. We reviewed all the consecutive bronchoscopies performed on heart transplant recipients (HTRs) from May 1988 to December 2001 (n = 44) and on lung transplant recipients (LTRs) from February 1994 to November 2002 (n = 472). To compare different assays in the detection of CMV, a total of 21 thoracic organ transplant recipients were prospectively monitored with CMV pp65-antigenemia, DNAemia (PCR), and mRNAemia (NASBA) tests. The antigenemia test was the reference assay for therapeutic intervention. In addition to CMV antigenemia, 22 LTRs were monitored for HHV-6 and HHV-7 antigenemia. The diagnostic yield of the clinically indicated bronchoscopies was 41% in the HTRs and 61% in the LTRs. The utility of bronchoscopy was highest from one to six months after transplantation. In contrast, the findings from the surveillance bronchoscopies performed on LTRs led to a change in the previous treatment in only 6% of the cases. Pneumocystis carinii and CMV were the most commonly detected pathogens. Furthermore, 15 (65%) of the P. carinii infections in the LTRs were detected during chemoprophylaxis. None of the complications of the bronchoscopies was fatal. Antigenemia, DNAemia, and mRNAemia were present in 98%, 72%, and 43% of the CMV infections, respectively. The optimal DNAemia cut-off levels (sensitivity/specificity) were 400 (75.9/92.7%), 850 (91.3/91.3%), and 1250 (100/91.5%) copies/ml for antigenemia levels of 2, 5, and 10 pp65-positive leukocytes/50 000 leukocytes, respectively. The sensitivities of the NASBA were 25.9%, 43.5%, and 56.3% in detecting the same cut-off levels. CMV DNAemia was detected in 93% and mRNAemia in 61% of the CMV antigenemia episodes requiring antiviral therapy. HHV-6, HHV-7, and CMV antigenemia were detected in 20 (91%), 11 (50%), and 12 (55%) of the 22 LTRs (at a median of 16, 31, and 165 days), respectively. HHV-6 appeared in 15 (79%), HHV-7 in seven (37%), and CMV in one (7%) of these patients during ganciclovir or valganciclovir prophylaxis. One case of pneumonitis and another of encephalitis were associated with HHV-6. In conclusion, bronchoscopy is a safe and useful diagnostic tool in LTRs and HTRs with a suspected respiratory infection, but the role of surveillance bronchoscopy in LTRs remains controversial. The PCR assay performs comparably to the antigenemia test in guiding pre-emptive therapy against CMV when threshold levels of over 5 pp65-antigen-positive leukocytes are used. In contrast, the low sensitivity of NASBA limits its usefulness. HHV-6 and HHV-7 activation is common after lung transplantation despite ganciclovir or valganciclovir prophylaxis, but clinical manifestations are infrequently linked to these viruses.
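The sensitivity/specificity pairs quoted for the DNAemia cut-offs follow from comparing each PCR result against the antigenemia reference at a chosen threshold. A minimal sketch of that comparison is given below; the data pairs are hypothetical and the function is not the software used in the study.

```python
# Minimal sketch: evaluating a DNAemia cut-off against the pp65-antigenemia
# reference, as in the sensitivity/specificity figures quoted above.

def sensitivity_specificity(samples, dna_cutoff, ag_threshold):
    """samples: list of (dna_copies_per_ml, pp65_positive_cells) pairs."""
    tp = fn = tn = fp = 0
    for dna, ag in samples:
        reference_positive = ag >= ag_threshold
        test_positive = dna >= dna_cutoff
        if reference_positive and test_positive:
            tp += 1
        elif reference_positive:
            fn += 1
        elif test_positive:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical paired results: (DNA copies/ml, pp65-positive cells / 50 000 leukocytes)
data = [(1500, 12), (700, 6), (1000, 1), (2000, 20), (100, 0), (600, 3), (50, 0), (1300, 8)]
sens, spec = sensitivity_specificity(data, dna_cutoff=850, ag_threshold=5)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")   # 0.75, 0.75 for this toy data
```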
Abstract:
Continuous epidural analgesia (CEA) and continuous spinal postoperative analgesia (CSPA) provided by a mixture of local anaesthetic and opioid are widely used for postoperative pain relief. With the introduction of so-called microcatheters, for example, CSPA found its way particularly into orthopaedic surgery. These techniques, however, may be associated with dose-dependent side-effects such as hypotension, weakness in the legs, and nausea and vomiting. At times, they may fail to offer sufficient analgesia, e.g. because of a misplaced catheter. The correct position of an epidural catheter might be confirmed by the supposedly easy and reliable epidural stimulation test (EST). The aims of this thesis were to determine a) whether the efficacy, tolerability, and reliability of CEA might be improved by adding the α2-adrenergic agonists adrenaline and clonidine to CEA, and by the repeated use of EST during CEA; and b) the feasibility of CSPA given through a microcatheter after vascular surgery. Studies I–IV were double-blinded, randomized, and controlled trials; Study V was of a diagnostic, prospective nature. Patients underwent arterial bypass surgery of the legs (I, n=50; IV, n=46), total knee arthroplasty (II, n=70; III, n=72), and abdominal surgery or thoracotomy (V, n=30). Postoperative lumbar CEA consisted of regular mixtures of ropivacaine and fentanyl either without or with adrenaline (2 µg/ml (I) and 4 µg/ml (II)) or clonidine (2 µg/ml (III)). CSPA (IV) was given through a microcatheter (28G) and contained either ropivacaine (max. 2 mg/h) or a mixture of ropivacaine (max. 1 mg/h) and morphine (max. 8 µg/h). Epidural catheter tip position (V) was evaluated both by EST, at the moment of catheter placement and several times during CEA, and by epidurography as the reference diagnostic test. CEA and CSPA were administered for 24 or 48 h. Study parameters included pain scores assessed with a visual analogue scale, requirements of rescue pain medication, vital signs, and side-effects. Adrenaline (I and II) had no beneficial influence on the efficacy or tolerability of CEA. The total amounts of epidurally infused drugs were even increased in the adrenaline group in Study II (p=0.02, RM ANOVA). Clonidine (III) augmented pain relief with lower amounts of epidurally infused drugs (p=0.01, RM ANOVA) and a reduced need for rescue oxycodone given i.m. (p=0.027, MW-U; median difference 3 mg (95% CI 0–7 mg)). Clonidine did not contribute to sedation, and its influence on haemodynamics was minimal. CSPA (IV) provided satisfactory pain relief with only limited blockade of the legs (no inter-group differences). EST (V) was often associated with technical problems and difficulties of interpretation; e.g., it failed to identify the four patients whose catheters were outside the spinal canal already at the time of catheter placement. As adjuvants to lumbar CEA, clonidine only slightly improved pain relief, while adrenaline did not provide any benefit. The role of EST applied at the time of epidural catheter placement or repeatedly during CEA remains open. The microcatheter CSPA technique appeared effective and reliable, but needs to be compared with routine CEA after peripheral arterial bypass surgery.
Abstract:
Background: The incidence of all forms of congenital heart defects is 0.75%. For patients with congenital heart defects, life expectancy has improved with new treatment modalities. Structural heart defects may require surgical or catheter treatment, which may be corrective or palliative. Even those with corrective therapy need regular follow-up because of residual lesions, late sequelae, and possible complications after interventions. Aims: The aim of this thesis was to evaluate cardiac function before and after treatment for volume overload of the right ventricle (RV) caused by atrial septal defect (ASD), volume overload of the left ventricle (LV) caused by patent ductus arteriosus (PDA), and pressure overload of the LV caused by coarctation of the aorta (CoA), and to evaluate cardiac function in patients with Mulibrey nanism. Methods: In Study I, of the 24 children with ASD, 7 underwent surgical correction and 17 percutaneous occlusion of the ASD. Study II had 33 patients with PDA undergoing percutaneous occlusion. In Study III, 28 patients with CoA underwent either surgical correction or percutaneous balloon dilatation of the CoA. Study IV comprised 26 children with Mulibrey nanism. A total of 76 healthy volunteer children were examined as a control group. In each study, controls were matched to patients. All patients and controls underwent clinical cardiovascular examination, two-dimensional (2D) and three-dimensional (3D) echocardiographic examinations, and blood sampling for measurement of natriuretic peptides prior to the intervention and two or three times thereafter. Control children were examined once by 2D and 3D echocardiography. M-mode echocardiography was performed from the parasternal long-axis view directed by 2D echocardiography. The left atrium-to-aorta (LA/Ao) ratio was calculated as an index of LA size. The end-diastolic and end-systolic dimensions of the LV as well as the end-diastolic thicknesses of the interventricular septum and LV posterior wall were measured. LV volumes and the fractional shortening (FS) and ejection fraction (EF) as indices of contractility were then calculated, and the z scores of LV dimensions determined. Diastolic function of the LV was estimated from the mitral inflow signal obtained by Doppler echocardiography. In three-dimensional echocardiography, time-volume curves were used to determine end-diastolic and end-systolic volumes, stroke volume, and EF. Diastolic and systolic function of the LV was estimated from the calculated first derivatives of these curves. Results: (I): In all children with ASD, during the one-year follow-up, the z score of the RV end-diastolic diameter decreased and that of the LV increased. However, dilatation of the RV did not resolve entirely during the follow-up in either treatment group. In addition, the size of the LV increased more slowly in the surgical subgroup but reached control levels in both groups. Concentrations of natriuretic peptides in patients treated percutaneously increased during the first month after ASD closure and normalized thereafter, but in patients treated surgically they remained higher than in controls. (II): In the PDA group, at baseline, the end-diastolic diameter of the LV exceeded +2 SD in 5 of 33 patients. The median N-terminal pro-brain natriuretic peptide (proBNP) concentration before closure was 141 ng/l in the PDA group versus 72 ng/l in the control group (P = 0.001), and 6 months after closure it was 78.5 ng/l (P = NS).
Patients differed from control subjects in indices of LV diastolic and systolic function at baseline, but by the end of follow-up all these differences had disappeared. Even in the subgroup of patients with a normal-sized LV at baseline, the LV end-diastolic volume decreased significantly during follow-up. (III): Before repair, the size and wall thickness of the LV were greater in patients with CoA than in controls. Systolic blood pressure was a median 123 mm Hg in patients before repair (P < 0.001), 103 mm Hg one year thereafter, and 101 mm Hg in controls. The diameter of the coarctation segment measured a median 3.0 mm at baseline and 7.9 mm at the 12-month follow-up (P = 0.006). Thicknesses of the interventricular septum and posterior wall of the LV decreased after repair but increased to the initial level one year thereafter. The velocity time integrals of mitral inflow increased, but no changes were evident in LV dimensions or contractility. During follow-up, serum levels of natriuretic peptides decreased, correlating with diastolic and systolic indices of LV function in 2D and 3D echocardiography. (IV): In 2D echocardiography, the interventricular septum and LV posterior wall were thicker, and the velocity time integrals of mitral inflow shorter, in patients with Mulibrey nanism than in controls. In 3D echocardiography, LV end-diastolic volume measured a median 51.9 (range 33.3 to 73.4) ml/m² in patients and 59.7 (range 37.6 to 87.6) ml/m² in controls (P = 0.040); serum levels of ANPN and proBNP were a median 0.54 (range 0.04 to 4.7) nmol/l and 289 (range 18 to 9170) ng/l in patients, versus 0.28 (range 0.09 to 0.72) nmol/l (P < 0.001) and 54 (range 26 to 139) ng/l (P < 0.001) in controls. They correlated with several indices of diastolic LV function. Conclusions: (I): During the one-year follow-up after ASD closure, RV size decreased but did not normalize in all patients. The size of the LV normalized after ASD closure, but the increase in LV size was slower in patients treated surgically than in those treated with the percutaneous technique. Serum levels of ANPN and proBNP were elevated prior to ASD closure but decreased thereafter to control levels in patients treated with the percutaneous technique, but not in those treated surgically. (II): Changes in LV volume and function caused by PDA disappeared by 6 months after percutaneous closure. Even the children with a normal-sized LV benefited from the procedure. (III): After repair of CoA, the RV size and the velocity time integrals of mitral inflow increased, and serum levels of natriuretic peptides decreased. Patients need close follow-up despite cessation of LV pressure overload, since LV hypertrophy persisted even in normotensive patients with normal growth of the coarctation segment. (IV): In children with Mulibrey nanism, the LV wall was hypertrophied, with myocardial restriction and impairment of LV function. Significant correlations appeared between indices of LV function, the size of the left atrium, and levels of natriuretic peptides, indicating that measurement of serum natriuretic peptide levels can be used in the clinical follow-up of this patient group despite their dependence on loading conditions.
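The M-mode indices named in the Methods (FS, EF and the LA/Ao ratio) come from standard formulas. The sketch below assumes the Teichholz convention for converting M-mode dimensions to volumes, which may differ from the exact convention used in the thesis; the dimensions are illustrative, not study data.

```python
# Worked sketch of standard M-mode echocardiographic indices: fractional shortening
# (FS), an ejection fraction (EF) estimate, and the LA/Ao ratio. The Teichholz
# volume conversion is an assumption; example dimensions are hypothetical.

def teichholz_volume(d_cm: float) -> float:
    """LV volume (ml) from an M-mode internal dimension (cm), Teichholz formula."""
    return (7.0 / (2.4 + d_cm)) * d_cm ** 3

def lv_indices(lvedd_cm, lvesd_cm, la_cm, ao_cm):
    fs = 100.0 * (lvedd_cm - lvesd_cm) / lvedd_cm          # fractional shortening, %
    edv, esv = teichholz_volume(lvedd_cm), teichholz_volume(lvesd_cm)
    ef = 100.0 * (edv - esv) / edv                          # ejection fraction, %
    return {"FS %": round(fs, 1), "EF %": round(ef, 1), "LA/Ao": round(la_cm / ao_cm, 2)}

print(lv_indices(lvedd_cm=4.2, lvesd_cm=2.6, la_cm=2.8, ao_cm=2.0))
# {'FS %': 38.1, 'EF %': 68.7, 'LA/Ao': 1.4}
```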
Abstract:
Technological development of fast multisection helical computed tomography (CT) scanners has made CT perfusion (CTp) and CT angiography (CTA) feasible in the evaluation of acute ischemic stroke. This study focuses on new multidetector CT techniques, namely whole-brain and first-pass CT perfusion plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11, R² was 0.73, and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, the contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients within 3 hours of stroke onset. A monotonic increase in peak opacification with iodine concentration was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict the final infarct volume, and whether all these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). The effect of IV-rtPA on the affected brain was also investigated by measuring the volume of salvaged tissue in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain than did controls. Carotid CTA was compared with carotid digital subtraction angiography (DSA) in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as by semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
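The three first-pass parameters mentioned above are linked by the central volume principle, MTT = CBV / CBF. A minimal numeric sketch with illustrative values (not study data) shows why a hypoperfused region stands out most clearly on the MTT map.

```python
# Central volume principle linking the first-pass perfusion maps: MTT = CBV / CBF.
# The tissue values below are illustrative placeholders, not measurements from the study.

def mtt_seconds(cbv_ml_per_100g: float, cbf_ml_per_100g_min: float) -> float:
    """Mean transit time in seconds from CBV (ml/100 g) and CBF (ml/100 g/min)."""
    return 60.0 * cbv_ml_per_100g / cbf_ml_per_100g_min

for label, cbv, cbf in [("normal grey matter", 4.0, 60.0),
                        ("hypoperfused region", 3.5, 20.0)]:
    print(f"{label}: MTT ≈ {mtt_seconds(cbv, cbf):.1f} s")
# Prolonged MTT with relatively preserved CBV is the pattern behind using the MTT
# lesion as the outer bound of abnormally perfused tissue, as described above.
```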
Abstract:
Pediatric renal transplantation (TX) has evolved greatly during the past few decades, and today TX is considered the standard care for children with end-stage renal disease. In Finland, 191 children had received renal transplants by October 2007, and 42% of them have already reached adulthood. Improvements in the treatment of end-stage renal disease, surgical techniques, intensive care medicine, and immunosuppressive therapy have paved the way to the current highly successful outcomes of pediatric transplantation. In children, the transplanted graft should last for decades, and normal growth and development should be guaranteed. These objectives set considerable requirements for optimizing and fine-tuning the post-operative therapy. Careful optimization of immunosuppressive therapy is crucial in protecting the graft against rejection, but also in protecting the patient against adverse effects of the medication. In the present study, the results of a retrospective investigation into individualized dosing of immunosuppressive medication, based on pharmacokinetic profiles, therapeutic drug monitoring, graft function and histology studies, and glucocorticoid biological activity determinations, are reported. Subgroups of a total of 178 patients, who received renal transplants in 1988–2006, were included in the study. The mean age at TX was 6.5 years, and approximately 26% of the patients were <2 years of age. The most common diagnosis leading to renal TX was congenital nephrosis of the Finnish type (NPHS1). Pediatric patients in Finland receive standard triple immunosuppression consisting of cyclosporine A (CsA), methylprednisolone (MP) and azathioprine (AZA) after renal TX. Optimal dosing of these agents is important to prevent rejections and preserve graft function on the one hand, and to avoid the potentially serious adverse effects on the other. CsA has a narrow therapeutic window and individually variable pharmacokinetics. Therapeutic monitoring of CsA is therefore mandatory. Traditionally, CsA monitoring has been based on pre-dose trough levels (C0), but recent pharmacokinetic and clinical studies have revealed that the immunosuppressive effect may be related to diurnal CsA exposure and to the blood CsA concentration 0-4 hours after dosing. The two-hour post-dose concentration (C2) has proved a reliable surrogate marker of CsA exposure. Individual starting doses of CsA were analyzed in 65 patients. A recommended dose based on a pre-TX pharmacokinetic study was calculated for each patient according to the pre-TX protocol. The predicted dose was clearly higher in the youngest children than in the older ones (22.9±10.4 and 10.5±5.1 mg/kg/d in patients <2 and >8 years of age, respectively). The actually administered oral doses of CsA were collected for three weeks after TX and compared with the pharmacokinetically predicted dose. After the TX, dosing of CsA was adjusted according to clinical parameters and the blood CsA trough concentration. The pharmacokinetically predicted dose and patient age were the two significant parameters explaining post-TX doses of CsA. Accordingly, young children received significantly higher oral doses of CsA than the older ones. The correlation with the actually administered doses after TX was best in those patients who had a predicted dose clearly higher or lower (> ±25%) than the average in their age group. Due to the great individual variation in pharmacokinetics, standardized dosing of CsA (based on body mass or surface area) may not be adequate.
Pre-TX profiles are helpful in determining suitable initial CsA doses. CsA monitoring based on trough and C2 concentrations was analyzed in 47 patients who received renal transplants in 2001–2006. C0 and C2 concentrations and acute rejection episodes were recorded during the post-TX hospitalization, and also three months after TX when the first protocol core biopsy was obtained. The patients who remained rejection-free had slightly higher C2 concentrations, especially very early after TX. However, after the first two weeks the trough level was also higher in the rejection-free patients than in those with acute rejections. Three months after TX the trough level was higher in patients with normal histology than in those with rejection changes in the routine biopsy. Monitoring of both the trough level and C2 may thus be warranted to guarantee a sufficient peak concentration and baseline immunosuppression on the one hand, and to avoid over-exposure on the other. Controlling rejection in the early months after transplantation is crucial, as it may contribute to the development of long-term allograft nephropathy. Recently, it has become evident that immunoactivation fulfilling the histological criteria of acute rejection is possible in a well-functioning graft with no clinical signs or laboratory perturbations. The influence of treating subclinical rejection, diagnosed in the 3-month protocol biopsy, on graft function and histology 18 months after TX was analyzed in 22 patients and compared with 35 historical control patients. The incidence of subclinical rejection at three months was 43%, and the patients received a standard rejection treatment (a course of increased MP) and/or increased baseline immunosuppression, depending on the severity of rejection and graft function. Glomerular filtration rate (GFR) at 18 months was significantly better in the patients who were screened and treated for subclinical rejection than in the historical patients (86.7±22.5 vs. 67.9±31.9 ml/min/1.73 m², respectively). The improvement was most remarkable in the youngest (<2 years) age group (94.1±11.0 vs. 67.9±26.8 ml/min/1.73 m²). Histological findings of chronic allograft nephropathy were also more common in the historical patients in the 18-month protocol biopsy. All pediatric renal TX patients receive MP as part of the baseline immunosuppression. Although the maintenance dose of MP is very low in the majority of the patients, the well-known steroid-related adverse effects are not uncommon. A previous study in Finnish pediatric TX patients showed that steroid exposure, measured as the area under the concentration-time curve (AUC), rather than the dose, correlates with the adverse effects. In the present study, MP AUC was measured in sixteen stable maintenance patients, and a correlation with excess weight gain during the 12 months after TX, as well as with height deficit, was found. A novel bioassay measuring the activation of the glucocorticoid receptor-dependent transcription cascade was also employed to assess the biological effect of MP. Glucocorticoid bioactivity was found to be related to the adverse effects, although the relationship was not as apparent as that with serum MP concentration. The findings in this study support individualized monitoring and adjustment of immunosuppression based on pharmacokinetics, graft function and histology. Pharmacokinetic profiles are helpful in estimating drug exposure and thus identifying the patients who might be at risk of excessive or insufficient immunosuppression.
Individualized doses and monitoring of blood concentrations should definitely be employed with CsA, but possibly also with steroids. As an alternative to complete steroid withdrawal, individualized dosing based on drug exposure monitoring might help in avoiding the adverse effects. Early screening and treatment of subclinical immunoactivation is beneficial as it improves the prospects of good long-term graft function.
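The exposure measures underlying these dosing recommendations (AUC, C0, C2) are computed from timed blood concentrations. Below is a minimal sketch using the linear trapezoidal rule; the profile values are hypothetical, and this is not the pharmacokinetic software or protocol used in the study.

```python
# Minimal sketch: estimating drug exposure (AUC) from a concentration-time profile
# with the linear trapezoidal rule, and reading off C0 and C2. Values are hypothetical.

def auc_trapezoidal(times_h, concentrations):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times_h, concentrations),
                                             zip(times_h[1:], concentrations[1:])))

# Hypothetical CsA blood-concentration profile over the first 12 h after an oral dose (µg/l)
times = [0, 1, 2, 4, 6, 8, 12]
conc  = [90, 600, 850, 500, 320, 220, 120]
print(f"AUC(0-12) ≈ {auc_trapezoidal(times, conc):.0f} µg·h/l, "
      f"C0 = {conc[0]} µg/l, C2 = {conc[2]} µg/l")
```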
Abstract:
Background. Hyperlipidemia is a common concern in patients with heterozygous familial hypercholesterolemia (HeFH) and in cardiac transplant recipients. In both groups, an elevated serum LDL cholesterol level accelerates the development of atherosclerotic vascular disease and increases the rates of cardiovascular morbidity and mortality. The purpose of this study was to assess the pharmacokinetics, efficacy, and safety of cholesterol-lowering pravastatin in children with HeFH and in pediatric cardiac transplant recipients receiving immunosuppressive medication. Patients and Methods. The pharmacokinetics of pravastatin was studied in 20 HeFH children and in 19 pediatric cardiac transplant recipients receiving triple immunosuppression. The patients ingested a single 10-mg dose of pravastatin, and plasma pravastatin concentrations were measured for up to 10 or 24 hours. The efficacy and safety of pravastatin over one to two years were studied in 30 patients with HeFH (maximum dose 10 to 60 mg/day) and in 19 cardiac transplant recipients (10 mg/day). In a subgroup of 16 HeFH children, serum non-cholesterol sterol ratios (10² x mmol/mol of cholesterol), surrogate estimates of cholesterol absorption (cholestanol, campesterol, sitosterol) and synthesis (desmosterol and lathosterol), were studied at study baseline (on plant stanol esters) and during the combination of pravastatin and plant stanol esters. In the transplant recipients, the lipoprotein levels and their mass compositions were analyzed before and after one year of pravastatin use, and then compared with values measured from 21 healthy pediatric controls. The transplant recipients were grouped into patients with transplant coronary artery disease (TxCAD) and patients without TxCAD, based on annual angiography evaluations before pravastatin. Results. In the cardiac transplant recipients, the mean area under the plasma concentration-time curve of pravastatin [AUC(0-10)], 264.1 ± 192.4 ng·h/mL, was nearly ten-fold higher than in the HeFH children (26.6 ± 17.0 ng·h/mL). By 2, 4, 6, 12 and 24 months of treatment, the LDL cholesterol levels in the HeFH children had decreased by 25%, 26%, 29%, 33%, and 32%, respectively. In the HeFH group, pravastatin treatment increased the markers of cholesterol absorption and decreased those of synthesis. High ratios of cholestanol to cholesterol were associated with poor cholesterol-lowering efficacy of pravastatin. In the cardiac transplant recipients, pravastatin 10 mg/day lowered the LDL cholesterol by approximately 19%. Compared with the patients without TxCAD, patients with TxCAD had significantly lower HDL cholesterol concentrations and higher apoB-100/apoA-I ratios at baseline (1.0 ± 0.3 mmol/L vs. 1.4 ± 0.3 mmol/L, P = 0.031; and 0.7 ± 0.2 vs. 0.5 ± 0.1, P = 0.034) and after one year of pravastatin use (1.0 ± 0.3 mmol/L vs. 1.4 ± 0.3 mmol/L, P = 0.013; and 0.6 ± 0.2 vs. 0.4 ± 0.1, P = 0.005). Compared with healthy controls, the transplant recipients exhibited elevated serum triglycerides at baseline (median 1.3 [range 0.6-3.2] mmol/L vs. 0.7 [0.3-2.4] mmol/L, P = 0.0002), which correlated negatively with their HDL cholesterol concentration (r = -0.523, P = 0.022). The recipients also exhibited higher apoB-100/apoA-I ratios (0.6 ± 0.2 vs. 0.4 ± 0.1, P = 0.005). In addition, elevated triglyceride levels were still observed after one year of pravastatin use (1.3 [0.5-3.5] mmol/L vs. 0.7 [0.3-2.4] mmol/L, P = 0.0004).
Clinically significant elevations in alanine aminotransferase, creatine kinase, or creatinine occurred in neither group. Conclusions. Immunosuppressive medication considerably increased the plasma pravastatin concentrations. In both patient groups, pravastatin treatment was moderately effective, safe, and well tolerated. In the HeFH group, high baseline cholesterol absorption seemed to predispose patients to insufficient cholesterol-lowering efficacy of pravastatin. In the cardiac transplant recipients, low HDL cholesterol and a high apoB-100/apoA-I ratio were associated with the development of TxCAD. Even though pravastatin effectively lowered serum total and LDL cholesterol concentrations in the transplant recipients, it failed to normalize their elevated triglyceride levels and, in some patients, to prevent the progression of TxCAD.
Abstract:
Painful bladder syndrome/interstitial cystitis (PBS/IC) is a chronic urinary bladder disorder of unknown etiology characterized by symptoms of bladder pain and urinary frequency. PBS/IC is a chronic disease in which drug therapy has not led to significant success over the course of time. If the symptoms of PBS/IC are refractory to standard treatments, a possible cure might demand surgical intervention involving cystectomy. With the possible autoimmune etiology in mind, immunosuppressive drug therapy with cyclosporine A (CyA) was started in patients with refractory PBS/IC. CyA is a potent anti-inflammatory drug, a calcineurin inhibitor which inhibits T lymphocyte IL-2 production. T cells are present in abundance in the bladder inflammation of PBS/IC. On the basis of a pilot short-term study with CyA in PBS/IC, use of CyA was continued empirically over the long term. We conducted a prospective, randomized, six-month study in 64 patients comparing the effect of CyA with the FDA-approved treatment, pentosan polysulfate sodium (PPS). We measured the drug effect on patients' symptoms, the potassium sensitivity test, and urinary biomarkers. We further tested the impact of CyA, PPS, DMSO and BCG therapy on a health-related quality of life (HRQOL) questionnaire and evaluated the response rate to treatment with these therapies. Long-term use of CyA was safe and effective in PBS/IC patients. The good clinical effect matured individually over the years in which CyA was continued. Cessation of medication led to the reappearance of symptoms, and restarting CyA to renewed alleviation, so CyA was administered as continuous medication. The response rate to CyA increased during the study period, comprising 75% of CyA patients at six months, whereas 19% of patients responded to PPS therapy. Adverse effects were more common in the CyA group, underlining the importance of monitoring drug safety and titrating the dose appropriately. The potassium sensitivity test is positive in the majority of PBS/IC patients. Successful therapy of PBS/IC can alter nerve sensitivity to external potassium; this effect was seen more often after CyA therapy. Successful treatment of PBS/IC with CyA resulted in decreased urinary levels of EGF. IL-6 levels in urine were higher among older patients with a longer history of PBS/IC. In these patients, reduced levels of urinary IL-6 were measured after CyA therapy. Patients who experienced the best treatment response had improved quality of life according to the post-treatment HRQOL questionnaire. CyA had more impact on the majority of the aspects of quality of life than PPS. Although DMSO therapy was more successful than BCG in terms of the number of responders, DMSO and BCG had equal effects on the HRQOL questionnaire.
Abstract:
Wood-degrading fungi are able to degrade a large range of recalcitrant pollutants that resemble the lignin biopolymer. This ability is attributed to the production of lignin-modifying enzymes, which are extracellular and non-specific. Despite the potential of fungi in bioremediation, there is still a gap in understanding of the technology. In this thesis, the feasibility of two ex situ fungal bioremediation methods for treating contaminated soil was evaluated. Treatment of polycyclic aromatic hydrocarbon (PAH)-contaminated marsh soil was studied in a stirred slurry-phase reactor. Because of the salt content of marsh soil, fungi were screened for their halotolerance, and the white-rot fungi Lentinus tigrinus, Irpex lacteus and Bjerkandera adusta were selected for further studies. These fungi degraded 40-60% of a PAH mixture (phenanthrene, fluoranthene, pyrene and chrysene) in a slurry-phase reactor (100 ml) during 30 days of incubation. Thereafter, B. adusta was selected for scale-up and optimization of the process in a 5 L reactor. Maximum degradation of dibenzothiophene (93%), fluoranthene (82%), pyrene (81%) and chrysene (83%) was achieved with the free-mycelium inoculum of the highest initial biomass (2.2 g/l). In autoclaved soil, manganese peroxidase (MnP) was the most important enzyme involved in PAH degradation. In non-sterile soil, endogenous soil microbes together with B. adusta also degraded the PAHs extensively, suggesting a synergistic action between the soil microbes and the fungus. A fungal solid-phase cultivation method to pretreat contaminated sawmill soil with a high organic matter content was developed to enhance the effectiveness of subsequent soil combustion. In a preliminary screening of 146 fungal strains, 28 of the 52 fungi that extensively colonized non-sterile contaminated soil were litter-decomposing fungi. The 18 strains selected for further study were characterized by their production of lignin-modifying and hydrolytic enzymes, of which MnP and endo-1,4-β-glucanase were the main enzymes during cultivation on Scots pine (Pinus sylvestris) bark. Of the six fungi selected for further tests, Gymnopilus luteofolius, Phanerochaete velutina, and Stropharia rugosoannulata were the most active degraders of soil organic matter. The results showed that a six-month pretreatment of sawmill soil would result in a 3.5-9.5% loss of organic matter, depending on the fungus applied. The pretreatment process was scaled up to a 0.56 m³ reactor, in which perforated plastic tubes filled with S. rugosoannulata growing on pine bark were introduced into the soil. The fungal pretreatment resulted in a soil mass loss of 30.5 kg, which represents 10% of the original soil mass (308 kg). Despite the fact that Scots pine bark contains several antimicrobial compounds, it was a suitable substrate for fungal growth and a promoter of the production of oxidative enzymes, as well as an excellent and cheap natural carrier of fungal mycelium. This thesis successfully developed two novel fungal ex situ bioremediation technologies and introduces new insights for their further full-scale application. Ex situ slurry-phase fungal reactors might be applied in cases where the soil has a high water content or the contaminant bioavailability is low, for example in wastewater treatment plants to remove pharmaceutical residues. Fungal solid-phase bioremediation is a promising technology for ex situ or in situ treatment of contaminated soil.
Abstract:
This study continues the work published by the Ruralia Institute in autumn 2008 on the regional economic impacts of a possible contraction of the forest industry. The updated study is more comprehensive than its predecessor, as the analysis now covers all pulp and paper companies in Finland. The permanent capacity cuts announced by the pulp and paper industry from the beginning of 2007 to autumn 2009 total nearly 19.0% compared with the 2007 level. In pulp production the losses exceed 20%, and in paper/board production they are about 17%. Including multiplier effects, the downsizing decisions amount to a total loss of 1.4 billion euros and 8,800 person-years of employment. When the reductions are related to the combined GDP and employment of the affected regions, the measures correspond to a 2.0% decline in economic growth and a 0.8% loss in employment. At the end of 2008, the corresponding losses were 0.9 billion euros and 6,000 person-years. The difficulties are concentrated in eastern Finland. Kymenlaakso has suffered the largest losses in monetary terms and in the number of employed. In relative terms, Kainuu has lost the most. Pohjois-Savo rises into the top three losers if the closure of Stora Enso's Varkaus mills goes ahead at the end of 2010. Entire mills have also been closed in Keski-Suomi, Lappi and Pohjanmaa, with clear damage as a result. Etelä-Karjala, Pohjois-Karjala, Pohjois-Pohjanmaa and Satakunta have escaped with smaller losses, as they have experienced only production stoppages and/or closures of individual production lines. Regions with a diversified economic structure adapt more quickly to the contraction of the forest industry; this is reflected, among other things, in smaller employment losses. The size of the regional economy also matters: the impacts are stronger for small regional economies. The study also examined the effects of further structural change in the forest industry. If the negative trend continues as Metla projected in spring 2009, or if the feared worst-case scenario of a halving of the sector's production were realized, the losses to the national economy would be significant. In Metla's scenario, the cumulative reduction in GDP would peak during 2009-2013 at about 450 million euros; in the worst-case scenario the figure would be nearly 700 million euros. The loss is accentuated by the economic recession, because of which other sectors cannot fully compensate for the difficulties of the pulp and paper industry. Total output would nevertheless appear to recover if the current recession were to remain short. Employment would suffer considerably in both negative scenarios. As a counterbalance, the effects of a so-called internal devaluation and of a recovery in the paper market were also studied. The internal devaluation would be realized through a moderate wage settlement in the pulp and paper industry and a cost advantage equivalent to the depreciation of the Swedish krona. The effect of a recovery in the international paper market was studied by raising export prices and volumes; of these, a rise in export prices proved the most effective. Comparing the effects of the positive scenarios with the regional economic losses caused by the decisions already made nevertheless reveals that an even better development would be required: Kymenlaakso and Pohjois-Savo would still remain in the red.
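The losses "including multiplier effects" quoted above are, in outline, built from a direct loss scaled by regional multipliers. The sketch below illustrates that arithmetic only; the multiplier and employment coefficient are hypothetical and are not taken from the study's input-output model.

```python
# Illustrative arithmetic behind "losses including multiplier effects": a direct
# output loss scaled by an output multiplier, with employment derived from an
# employment coefficient. All numbers are hypothetical, not the study's model or data.

def total_impact(direct_output_loss_meur, output_multiplier, jobs_per_meur):
    """Return (total output loss in M EUR, total employment loss in person-years)."""
    total_output = direct_output_loss_meur * output_multiplier
    total_jobs = total_output * jobs_per_meur
    return total_output, total_jobs

# Hypothetical regional example: 500 M EUR of direct pulp-and-paper output lost,
# an output multiplier of 1.8, and 5 person-years of employment per M EUR of output.
output_loss, job_loss = total_impact(500.0, 1.8, 5.0)
print(f"total output loss ≈ {output_loss:.0f} M EUR, employment loss ≈ {job_loss:.0f} person-years")
```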